00:00:00.001 Started by upstream project "autotest-per-patch" build number 126180 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.062 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.107 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.141 Using shallow fetch with depth 1 00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.141 > git --version # timeout=10 00:00:00.179 > git --version # 'git version 2.39.2' 00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.206 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.206 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.781 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.792 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.805 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.805 > git config core.sparsecheckout # timeout=10 00:00:04.817 > git read-tree -mu HEAD # timeout=10 00:00:04.834 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.858 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.859 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.944 [Pipeline] Start of Pipeline 00:00:04.958 [Pipeline] library 00:00:04.959 Loading library shm_lib@master 00:00:04.959 Library shm_lib@master is cached. Copying from home. 00:00:04.976 [Pipeline] node 00:00:04.983 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.988 [Pipeline] { 00:00:04.998 [Pipeline] catchError 00:00:05.000 [Pipeline] { 00:00:05.010 [Pipeline] wrap 00:00:05.021 [Pipeline] { 00:00:05.030 [Pipeline] stage 00:00:05.032 [Pipeline] { (Prologue) 00:00:05.207 [Pipeline] sh 00:00:05.507 + logger -p user.info -t JENKINS-CI 00:00:05.526 [Pipeline] echo 00:00:05.528 Node: GP11 00:00:05.535 [Pipeline] sh 00:00:05.830 [Pipeline] setCustomBuildProperty 00:00:05.840 [Pipeline] echo 00:00:05.841 Cleanup processes 00:00:05.845 [Pipeline] sh 00:00:06.123 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.124 3618364 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.138 [Pipeline] sh 00:00:06.420 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.420 ++ grep -v 'sudo pgrep' 00:00:06.420 ++ awk '{print $1}' 00:00:06.420 + sudo kill -9 00:00:06.420 + true 00:00:06.436 [Pipeline] cleanWs 00:00:06.446 [WS-CLEANUP] Deleting project workspace... 00:00:06.446 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.452 [WS-CLEANUP] done 00:00:06.456 [Pipeline] setCustomBuildProperty 00:00:06.471 [Pipeline] sh 00:00:06.777 + sudo git config --global --replace-all safe.directory '*' 00:00:06.875 [Pipeline] httpRequest 00:00:06.896 [Pipeline] echo 00:00:06.898 Sorcerer 10.211.164.101 is alive 00:00:06.906 [Pipeline] httpRequest 00:00:06.911 HttpMethod: GET 00:00:06.911 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.912 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.930 Response Code: HTTP/1.1 200 OK 00:00:06.930 Success: Status code 200 is in the accepted range: 200,404 00:00:06.930 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.674 [Pipeline] sh 00:00:10.958 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.973 [Pipeline] httpRequest 00:00:10.997 [Pipeline] echo 00:00:10.999 Sorcerer 10.211.164.101 is alive 00:00:11.007 [Pipeline] httpRequest 00:00:11.011 HttpMethod: GET 00:00:11.012 URL: http://10.211.164.101/packages/spdk_417133c034a8e98fe5679a5537477ef7ae63cff8.tar.gz 00:00:11.013 Sending request to url: http://10.211.164.101/packages/spdk_417133c034a8e98fe5679a5537477ef7ae63cff8.tar.gz 00:00:11.016 Response Code: HTTP/1.1 200 OK 00:00:11.017 Success: Status code 200 is in the accepted range: 200,404 00:00:11.017 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_417133c034a8e98fe5679a5537477ef7ae63cff8.tar.gz 00:00:27.559 [Pipeline] sh 00:00:27.846 + tar --no-same-owner -xf spdk_417133c034a8e98fe5679a5537477ef7ae63cff8.tar.gz 00:00:30.390 [Pipeline] sh 00:00:30.672 + git -C spdk log --oneline -n5 00:00:30.672 417133c03 bdevperf: Send RPC error resp. 
if no jobs started 00:00:30.672 cde5ad515 bdevperf: send test results in RPC response 00:00:30.672 2728651ee accel: adjust task per ch define name 00:00:30.672 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:30.672 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:30.683 [Pipeline] } 00:00:30.695 [Pipeline] // stage 00:00:30.702 [Pipeline] stage 00:00:30.704 [Pipeline] { (Prepare) 00:00:30.718 [Pipeline] writeFile 00:00:30.732 [Pipeline] sh 00:00:31.012 + logger -p user.info -t JENKINS-CI 00:00:31.023 [Pipeline] sh 00:00:31.301 + logger -p user.info -t JENKINS-CI 00:00:31.314 [Pipeline] sh 00:00:31.637 + cat autorun-spdk.conf 00:00:31.637 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.637 SPDK_TEST_NVMF=1 00:00:31.637 SPDK_TEST_NVME_CLI=1 00:00:31.637 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.637 SPDK_TEST_NVMF_NICS=e810 00:00:31.637 SPDK_TEST_VFIOUSER=1 00:00:31.637 SPDK_RUN_UBSAN=1 00:00:31.637 NET_TYPE=phy 00:00:31.644 RUN_NIGHTLY=0 00:00:31.648 [Pipeline] readFile 00:00:31.666 [Pipeline] withEnv 00:00:31.667 [Pipeline] { 00:00:31.678 [Pipeline] sh 00:00:31.959 + set -ex 00:00:31.959 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:31.959 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:31.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.959 ++ SPDK_TEST_NVMF=1 00:00:31.959 ++ SPDK_TEST_NVME_CLI=1 00:00:31.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.959 ++ SPDK_TEST_NVMF_NICS=e810 00:00:31.959 ++ SPDK_TEST_VFIOUSER=1 00:00:31.959 ++ SPDK_RUN_UBSAN=1 00:00:31.959 ++ NET_TYPE=phy 00:00:31.959 ++ RUN_NIGHTLY=0 00:00:31.959 + case $SPDK_TEST_NVMF_NICS in 00:00:31.959 + DRIVERS=ice 00:00:31.959 + [[ tcp == \r\d\m\a ]] 00:00:31.959 + [[ -n ice ]] 00:00:31.959 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:31.959 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:31.959 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:31.959 rmmod: ERROR: Module irdma is not currently loaded 00:00:31.959 rmmod: ERROR: Module i40iw is not currently loaded 00:00:31.959 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:31.959 + true 00:00:31.959 + for D in $DRIVERS 00:00:31.959 + sudo modprobe ice 00:00:31.959 + exit 0 00:00:31.968 [Pipeline] } 00:00:31.984 [Pipeline] // withEnv 00:00:31.989 [Pipeline] } 00:00:32.008 [Pipeline] // stage 00:00:32.018 [Pipeline] catchError 00:00:32.020 [Pipeline] { 00:00:32.036 [Pipeline] timeout 00:00:32.036 Timeout set to expire in 50 min 00:00:32.037 [Pipeline] { 00:00:32.051 [Pipeline] stage 00:00:32.053 [Pipeline] { (Tests) 00:00:32.068 [Pipeline] sh 00:00:32.353 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.353 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.353 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.353 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:32.353 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.353 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.353 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:32.353 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.353 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:32.353 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:32.353 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:32.353 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:32.353 + source /etc/os-release 00:00:32.353 ++ NAME='Fedora Linux' 00:00:32.353 ++ VERSION='38 (Cloud Edition)' 00:00:32.353 ++ ID=fedora 00:00:32.353 ++ VERSION_ID=38 00:00:32.353 ++ VERSION_CODENAME= 00:00:32.353 ++ PLATFORM_ID=platform:f38 00:00:32.353 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:32.353 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:32.353 ++ LOGO=fedora-logo-icon 00:00:32.353 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:32.353 ++ HOME_URL=https://fedoraproject.org/ 00:00:32.353 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:32.353 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:32.353 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:32.353 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:32.353 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:32.353 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:32.353 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:32.353 ++ SUPPORT_END=2024-05-14 00:00:32.353 ++ VARIANT='Cloud Edition' 00:00:32.353 ++ VARIANT_ID=cloud 00:00:32.353 + uname -a 00:00:32.353 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:32.353 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:33.312 Hugepages 00:00:33.312 node hugesize free / total 00:00:33.312 node0 1048576kB 0 / 0 00:00:33.312 node0 2048kB 0 / 0 00:00:33.312 node1 1048576kB 0 / 0 00:00:33.312 node1 2048kB 0 / 0 00:00:33.312 00:00:33.312 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:33.312 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:33.312 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:33.312 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:33.571 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:33.571 + rm -f /tmp/spdk-ld-path 00:00:33.571 + source autorun-spdk.conf 00:00:33.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.571 ++ SPDK_TEST_NVMF=1 00:00:33.571 ++ SPDK_TEST_NVME_CLI=1 00:00:33.571 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.571 ++ SPDK_TEST_NVMF_NICS=e810 00:00:33.571 ++ SPDK_TEST_VFIOUSER=1 00:00:33.571 ++ SPDK_RUN_UBSAN=1 00:00:33.571 ++ NET_TYPE=phy 00:00:33.571 ++ RUN_NIGHTLY=0 00:00:33.571 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:33.571 + [[ -n '' ]] 00:00:33.571 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.571 + for M in /var/spdk/build-*-manifest.txt 00:00:33.571 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:00:33.571 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.571 + for M in /var/spdk/build-*-manifest.txt 00:00:33.571 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:33.571 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.571 ++ uname 00:00:33.571 + [[ Linux == \L\i\n\u\x ]] 00:00:33.571 + sudo dmesg -T 00:00:33.571 + sudo dmesg --clear 00:00:33.571 + dmesg_pid=3619037 00:00:33.571 + [[ Fedora Linux == FreeBSD ]] 00:00:33.571 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.571 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.571 + sudo dmesg -Tw 00:00:33.571 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:33.571 + [[ -x /usr/src/fio-static/fio ]] 00:00:33.571 + export FIO_BIN=/usr/src/fio-static/fio 00:00:33.571 + FIO_BIN=/usr/src/fio-static/fio 00:00:33.571 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:33.571 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:33.571 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:33.571 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.571 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.571 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:33.571 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.571 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.571 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:33.571 Test configuration: 00:00:33.571 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.571 SPDK_TEST_NVMF=1 00:00:33.571 SPDK_TEST_NVME_CLI=1 00:00:33.571 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.571 SPDK_TEST_NVMF_NICS=e810 00:00:33.571 SPDK_TEST_VFIOUSER=1 00:00:33.571 SPDK_RUN_UBSAN=1 00:00:33.571 NET_TYPE=phy 00:00:33.571 RUN_NIGHTLY=0 12:52:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:33.571 12:52:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:33.571 12:52:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:33.571 12:52:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:33.571 12:52:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.571 12:52:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.571 12:52:55 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.571 12:52:55 -- paths/export.sh@5 -- $ export PATH 00:00:33.571 12:52:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.571 12:52:55 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:33.571 12:52:55 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:33.571 12:52:55 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721040775.XXXXXX 00:00:33.571 12:52:55 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721040775.R07Bjd 00:00:33.571 12:52:55 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:33.571 12:52:55 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:33.571 12:52:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:33.571 12:52:55 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:33.571 12:52:55 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:33.571 12:52:55 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:33.571 12:52:55 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:33.571 12:52:55 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.571 12:52:55 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:33.571 12:52:55 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:33.571 12:52:55 -- pm/common@17 -- $ local monitor 00:00:33.571 12:52:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.571 12:52:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.571 12:52:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.571 12:52:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.571 12:52:55 -- pm/common@21 -- $ date +%s 00:00:33.571 12:52:55 -- pm/common@21 -- $ date +%s 00:00:33.571 12:52:55 -- pm/common@25 -- $ sleep 1 00:00:33.571 12:52:55 -- pm/common@21 -- $ date +%s 00:00:33.571 12:52:55 -- pm/common@21 -- $ date +%s 00:00:33.571 12:52:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040775 00:00:33.571 12:52:55 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040775 00:00:33.571 12:52:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040775 00:00:33.571 12:52:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040775 00:00:33.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040775_collect-vmstat.pm.log 00:00:33.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040775_collect-cpu-load.pm.log 00:00:33.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040775_collect-cpu-temp.pm.log 00:00:33.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040775_collect-bmc-pm.bmc.pm.log 00:00:34.955 12:52:56 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:34.955 12:52:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:34.955 12:52:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:34.955 12:52:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:34.955 12:52:56 -- spdk/autobuild.sh@16 -- $ date -u 00:00:34.955 Mon Jul 15 10:52:56 AM UTC 2024 00:00:34.955 12:52:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:34.955 v24.09-pre-208-g417133c03 00:00:34.955 12:52:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:34.955 12:52:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:34.955 12:52:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:34.955 12:52:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:34.955 12:52:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:34.955 12:52:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.955 ************************************ 00:00:34.955 START TEST ubsan 00:00:34.955 ************************************ 00:00:34.955 12:52:56 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:34.955 using ubsan 00:00:34.955 00:00:34.955 real 0m0.000s 00:00:34.955 user 0m0.000s 00:00:34.955 sys 0m0.000s 00:00:34.955 12:52:56 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:34.955 12:52:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:34.955 ************************************ 00:00:34.955 END TEST ubsan 00:00:34.955 ************************************ 00:00:34.955 12:52:56 -- common/autotest_common.sh@1142 -- $ return 0 00:00:34.955 12:52:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:34.955 12:52:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:34.955 12:52:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:34.955 12:52:56 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:34.955 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:34.955 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:34.955 Using 'verbs' RDMA provider 00:00:45.871 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:55.842 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:55.842 Creating mk/config.mk...done. 00:00:55.842 Creating mk/cc.flags.mk...done. 00:00:55.842 Type 'make' to build. 00:00:55.842 12:53:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:00:55.842 12:53:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:55.842 12:53:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:55.842 12:53:16 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.842 ************************************ 00:00:55.842 START TEST make 00:00:55.842 ************************************ 00:00:55.842 12:53:16 make -- common/autotest_common.sh@1123 -- $ make -j48 00:00:55.842 make[1]: Nothing to be done for 'all'. 00:00:57.233 The Meson build system 00:00:57.233 Version: 1.3.1 00:00:57.233 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:57.233 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:57.233 Build type: native build 00:00:57.233 Project name: libvfio-user 00:00:57.233 Project version: 0.0.1 00:00:57.233 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:57.233 C linker for the host machine: cc ld.bfd 2.39-16 00:00:57.233 Host machine cpu family: x86_64 00:00:57.233 Host machine cpu: x86_64 00:00:57.233 Run-time dependency threads found: YES 00:00:57.233 Library dl found: YES 00:00:57.233 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:57.233 Run-time dependency json-c found: YES 0.17 00:00:57.233 Run-time dependency cmocka found: YES 1.1.7 00:00:57.233 Program pytest-3 found: NO 00:00:57.233 Program flake8 found: NO 00:00:57.233 Program misspell-fixer found: NO 00:00:57.233 Program restructuredtext-lint found: NO 00:00:57.233 Program valgrind found: YES (/usr/bin/valgrind) 00:00:57.233 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:57.233 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:57.233 Compiler for C supports arguments -Wwrite-strings: YES 00:00:57.233 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:57.233 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:57.233 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:57.233 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:57.233 Build targets in project: 8 00:00:57.233 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:57.233 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:57.233 00:00:57.233 libvfio-user 0.0.1 00:00:57.233 00:00:57.233 User defined options 00:00:57.233 buildtype : debug 00:00:57.233 default_library: shared 00:00:57.233 libdir : /usr/local/lib 00:00:57.233 00:00:57.233 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:57.816 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:58.078 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:58.078 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:58.078 [3/37] Compiling C object samples/null.p/null.c.o 00:00:58.078 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:58.078 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:58.078 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:58.078 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:58.078 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:58.078 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:58.078 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:58.078 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:58.078 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:58.078 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:58.078 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:58.078 [15/37] Compiling C object samples/server.p/server.c.o 00:00:58.078 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:58.078 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:58.078 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:58.078 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:58.078 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:58.078 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:58.078 [22/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:58.078 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:58.078 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:58.337 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:58.337 [26/37] Compiling C object samples/client.p/client.c.o 00:00:58.337 [27/37] Linking target samples/client 00:00:58.337 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:58.337 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:00:58.337 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:58.337 [31/37] Linking target test/unit_tests 00:00:58.599 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:58.599 [33/37] Linking target samples/server 00:00:58.599 [34/37] Linking target samples/lspci 00:00:58.599 [35/37] Linking target samples/null 00:00:58.599 [36/37] Linking target samples/shadow_ioeventfd_server 00:00:58.599 [37/37] Linking target samples/gpio-pci-idio-16 00:00:58.599 INFO: autodetecting backend as ninja 00:00:58.599 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:58.864 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:59.441 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:59.441 ninja: no work to do. 00:01:04.720 The Meson build system 00:01:04.720 Version: 1.3.1 00:01:04.720 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:04.720 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:04.720 Build type: native build 00:01:04.720 Program cat found: YES (/usr/bin/cat) 00:01:04.720 Project name: DPDK 00:01:04.720 Project version: 24.03.0 00:01:04.720 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:04.720 C linker for the host machine: cc ld.bfd 2.39-16 00:01:04.720 Host machine cpu family: x86_64 00:01:04.720 Host machine cpu: x86_64 00:01:04.720 Message: ## Building in Developer Mode ## 00:01:04.720 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:04.720 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:04.720 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:04.720 Program python3 found: YES (/usr/bin/python3) 00:01:04.720 Program cat found: YES (/usr/bin/cat) 00:01:04.720 Compiler for C supports arguments -march=native: YES 00:01:04.720 Checking for size of "void *" : 8 00:01:04.720 Checking for size of "void *" : 8 (cached) 00:01:04.720 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:04.720 Library m found: YES 00:01:04.720 Library numa found: YES 00:01:04.720 Has header "numaif.h" : YES 00:01:04.720 Library fdt found: NO 00:01:04.720 Library execinfo found: NO 00:01:04.720 Has header "execinfo.h" : YES 00:01:04.720 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:04.720 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:04.720 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:04.720 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:04.720 Run-time dependency openssl found: YES 3.0.9 00:01:04.720 Run-time dependency libpcap found: YES 1.10.4 00:01:04.720 Has header "pcap.h" with dependency libpcap: YES 00:01:04.720 Compiler for C supports arguments -Wcast-qual: YES 00:01:04.720 Compiler for C supports arguments -Wdeprecated: YES 00:01:04.720 Compiler for C supports arguments -Wformat: YES 00:01:04.720 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:04.720 Compiler for C supports arguments -Wformat-security: NO 00:01:04.720 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:04.720 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:04.720 Compiler for C supports arguments -Wnested-externs: YES 00:01:04.720 Compiler for C supports arguments -Wold-style-definition: YES 00:01:04.720 Compiler for C supports arguments -Wpointer-arith: YES 00:01:04.720 Compiler for C supports arguments -Wsign-compare: YES 00:01:04.720 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:04.720 Compiler for C supports arguments -Wundef: YES 00:01:04.720 Compiler for C supports arguments -Wwrite-strings: YES 00:01:04.720 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:04.720 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:04.720 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:04.720 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:04.720 Program objdump found: YES (/usr/bin/objdump) 00:01:04.720 Compiler for C supports arguments -mavx512f: YES 00:01:04.720 Checking if "AVX512 checking" compiles: YES 00:01:04.720 Fetching value of define "__SSE4_2__" : 1 00:01:04.720 Fetching value of define "__AES__" : 1 00:01:04.720 Fetching value of define "__AVX__" : 1 00:01:04.720 Fetching value of define "__AVX2__" : (undefined) 00:01:04.720 Fetching value of define "__AVX512BW__" : (undefined) 00:01:04.720 Fetching value of define "__AVX512CD__" : (undefined) 00:01:04.720 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:04.720 Fetching value of define "__AVX512F__" : (undefined) 00:01:04.720 Fetching value of define "__AVX512VL__" : (undefined) 00:01:04.720 Fetching value of define "__PCLMUL__" : 1 00:01:04.720 Fetching value of define "__RDRND__" : 1 00:01:04.720 Fetching value of define "__RDSEED__" : (undefined) 00:01:04.720 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:04.720 Fetching value of define "__znver1__" : (undefined) 00:01:04.720 Fetching value of define "__znver2__" : (undefined) 00:01:04.720 Fetching value of define "__znver3__" : (undefined) 00:01:04.720 Fetching value of define "__znver4__" : (undefined) 00:01:04.720 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:04.720 Message: lib/log: Defining dependency "log" 00:01:04.720 Message: lib/kvargs: Defining dependency "kvargs" 00:01:04.720 Message: lib/telemetry: Defining dependency "telemetry" 00:01:04.720 Checking for function "getentropy" : NO 00:01:04.720 Message: lib/eal: Defining dependency "eal" 00:01:04.720 Message: lib/ring: Defining dependency "ring" 00:01:04.720 Message: lib/rcu: Defining dependency "rcu" 00:01:04.720 Message: lib/mempool: Defining dependency "mempool" 00:01:04.720 Message: lib/mbuf: Defining dependency "mbuf" 00:01:04.720 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:04.720 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:04.720 Compiler for C supports arguments -mpclmul: YES 00:01:04.720 Compiler for C supports arguments -maes: YES 00:01:04.720 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.720 Compiler for C supports arguments -mavx512bw: YES 00:01:04.720 Compiler for C supports arguments -mavx512dq: YES 00:01:04.720 Compiler for C supports arguments -mavx512vl: YES 00:01:04.720 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:04.720 Compiler for C supports arguments -mavx2: YES 00:01:04.720 Compiler for C supports arguments -mavx: YES 00:01:04.720 Message: lib/net: Defining dependency "net" 00:01:04.720 Message: lib/meter: Defining dependency "meter" 00:01:04.720 Message: lib/ethdev: Defining dependency "ethdev" 00:01:04.720 Message: lib/pci: Defining dependency "pci" 00:01:04.720 Message: lib/cmdline: Defining dependency "cmdline" 00:01:04.720 Message: lib/hash: Defining dependency "hash" 00:01:04.720 Message: lib/timer: Defining dependency "timer" 00:01:04.720 Message: lib/compressdev: Defining dependency "compressdev" 00:01:04.720 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:04.720 Message: lib/dmadev: Defining dependency "dmadev" 00:01:04.720 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:04.720 Message: lib/power: Defining dependency "power" 00:01:04.720 Message: lib/reorder: Defining dependency "reorder" 00:01:04.720 
Message: lib/security: Defining dependency "security" 00:01:04.720 Has header "linux/userfaultfd.h" : YES 00:01:04.720 Has header "linux/vduse.h" : YES 00:01:04.720 Message: lib/vhost: Defining dependency "vhost" 00:01:04.720 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:04.720 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:04.720 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:04.720 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:04.720 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:04.720 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:04.720 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:04.720 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:04.720 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:04.720 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:04.720 Program doxygen found: YES (/usr/bin/doxygen) 00:01:04.720 Configuring doxy-api-html.conf using configuration 00:01:04.720 Configuring doxy-api-man.conf using configuration 00:01:04.720 Program mandb found: YES (/usr/bin/mandb) 00:01:04.720 Program sphinx-build found: NO 00:01:04.720 Configuring rte_build_config.h using configuration 00:01:04.720 Message: 00:01:04.720 ================= 00:01:04.720 Applications Enabled 00:01:04.720 ================= 00:01:04.720 00:01:04.720 apps: 00:01:04.720 00:01:04.720 00:01:04.720 Message: 00:01:04.720 ================= 00:01:04.720 Libraries Enabled 00:01:04.720 ================= 00:01:04.720 00:01:04.720 libs: 00:01:04.720 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:04.720 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:04.720 cryptodev, dmadev, power, reorder, security, vhost, 00:01:04.721 00:01:04.721 Message: 00:01:04.721 =============== 00:01:04.721 Drivers Enabled 00:01:04.721 =============== 00:01:04.721 00:01:04.721 common: 00:01:04.721 00:01:04.721 bus: 00:01:04.721 pci, vdev, 00:01:04.721 mempool: 00:01:04.721 ring, 00:01:04.721 dma: 00:01:04.721 00:01:04.721 net: 00:01:04.721 00:01:04.721 crypto: 00:01:04.721 00:01:04.721 compress: 00:01:04.721 00:01:04.721 vdpa: 00:01:04.721 00:01:04.721 00:01:04.721 Message: 00:01:04.721 ================= 00:01:04.721 Content Skipped 00:01:04.721 ================= 00:01:04.721 00:01:04.721 apps: 00:01:04.721 dumpcap: explicitly disabled via build config 00:01:04.721 graph: explicitly disabled via build config 00:01:04.721 pdump: explicitly disabled via build config 00:01:04.721 proc-info: explicitly disabled via build config 00:01:04.721 test-acl: explicitly disabled via build config 00:01:04.721 test-bbdev: explicitly disabled via build config 00:01:04.721 test-cmdline: explicitly disabled via build config 00:01:04.721 test-compress-perf: explicitly disabled via build config 00:01:04.721 test-crypto-perf: explicitly disabled via build config 00:01:04.721 test-dma-perf: explicitly disabled via build config 00:01:04.721 test-eventdev: explicitly disabled via build config 00:01:04.721 test-fib: explicitly disabled via build config 00:01:04.721 test-flow-perf: explicitly disabled via build config 00:01:04.721 test-gpudev: explicitly disabled via build config 00:01:04.721 test-mldev: explicitly disabled via build config 00:01:04.721 test-pipeline: explicitly disabled via build config 00:01:04.721 test-pmd: explicitly disabled via build config 
00:01:04.721 test-regex: explicitly disabled via build config 00:01:04.721 test-sad: explicitly disabled via build config 00:01:04.721 test-security-perf: explicitly disabled via build config 00:01:04.721 00:01:04.721 libs: 00:01:04.721 argparse: explicitly disabled via build config 00:01:04.721 metrics: explicitly disabled via build config 00:01:04.721 acl: explicitly disabled via build config 00:01:04.721 bbdev: explicitly disabled via build config 00:01:04.721 bitratestats: explicitly disabled via build config 00:01:04.721 bpf: explicitly disabled via build config 00:01:04.721 cfgfile: explicitly disabled via build config 00:01:04.721 distributor: explicitly disabled via build config 00:01:04.721 efd: explicitly disabled via build config 00:01:04.721 eventdev: explicitly disabled via build config 00:01:04.721 dispatcher: explicitly disabled via build config 00:01:04.721 gpudev: explicitly disabled via build config 00:01:04.721 gro: explicitly disabled via build config 00:01:04.721 gso: explicitly disabled via build config 00:01:04.721 ip_frag: explicitly disabled via build config 00:01:04.721 jobstats: explicitly disabled via build config 00:01:04.721 latencystats: explicitly disabled via build config 00:01:04.721 lpm: explicitly disabled via build config 00:01:04.721 member: explicitly disabled via build config 00:01:04.721 pcapng: explicitly disabled via build config 00:01:04.721 rawdev: explicitly disabled via build config 00:01:04.721 regexdev: explicitly disabled via build config 00:01:04.721 mldev: explicitly disabled via build config 00:01:04.721 rib: explicitly disabled via build config 00:01:04.721 sched: explicitly disabled via build config 00:01:04.721 stack: explicitly disabled via build config 00:01:04.721 ipsec: explicitly disabled via build config 00:01:04.721 pdcp: explicitly disabled via build config 00:01:04.721 fib: explicitly disabled via build config 00:01:04.721 port: explicitly disabled via build config 00:01:04.721 pdump: explicitly disabled via build config 00:01:04.721 table: explicitly disabled via build config 00:01:04.721 pipeline: explicitly disabled via build config 00:01:04.721 graph: explicitly disabled via build config 00:01:04.721 node: explicitly disabled via build config 00:01:04.721 00:01:04.721 drivers: 00:01:04.721 common/cpt: not in enabled drivers build config 00:01:04.721 common/dpaax: not in enabled drivers build config 00:01:04.721 common/iavf: not in enabled drivers build config 00:01:04.721 common/idpf: not in enabled drivers build config 00:01:04.721 common/ionic: not in enabled drivers build config 00:01:04.721 common/mvep: not in enabled drivers build config 00:01:04.721 common/octeontx: not in enabled drivers build config 00:01:04.721 bus/auxiliary: not in enabled drivers build config 00:01:04.721 bus/cdx: not in enabled drivers build config 00:01:04.721 bus/dpaa: not in enabled drivers build config 00:01:04.721 bus/fslmc: not in enabled drivers build config 00:01:04.721 bus/ifpga: not in enabled drivers build config 00:01:04.721 bus/platform: not in enabled drivers build config 00:01:04.721 bus/uacce: not in enabled drivers build config 00:01:04.721 bus/vmbus: not in enabled drivers build config 00:01:04.721 common/cnxk: not in enabled drivers build config 00:01:04.721 common/mlx5: not in enabled drivers build config 00:01:04.721 common/nfp: not in enabled drivers build config 00:01:04.721 common/nitrox: not in enabled drivers build config 00:01:04.721 common/qat: not in enabled drivers build config 00:01:04.721 common/sfc_efx: not in 
enabled drivers build config 00:01:04.721 mempool/bucket: not in enabled drivers build config 00:01:04.721 mempool/cnxk: not in enabled drivers build config 00:01:04.721 mempool/dpaa: not in enabled drivers build config 00:01:04.721 mempool/dpaa2: not in enabled drivers build config 00:01:04.721 mempool/octeontx: not in enabled drivers build config 00:01:04.721 mempool/stack: not in enabled drivers build config 00:01:04.721 dma/cnxk: not in enabled drivers build config 00:01:04.721 dma/dpaa: not in enabled drivers build config 00:01:04.721 dma/dpaa2: not in enabled drivers build config 00:01:04.721 dma/hisilicon: not in enabled drivers build config 00:01:04.721 dma/idxd: not in enabled drivers build config 00:01:04.721 dma/ioat: not in enabled drivers build config 00:01:04.721 dma/skeleton: not in enabled drivers build config 00:01:04.721 net/af_packet: not in enabled drivers build config 00:01:04.721 net/af_xdp: not in enabled drivers build config 00:01:04.721 net/ark: not in enabled drivers build config 00:01:04.721 net/atlantic: not in enabled drivers build config 00:01:04.721 net/avp: not in enabled drivers build config 00:01:04.721 net/axgbe: not in enabled drivers build config 00:01:04.721 net/bnx2x: not in enabled drivers build config 00:01:04.721 net/bnxt: not in enabled drivers build config 00:01:04.721 net/bonding: not in enabled drivers build config 00:01:04.721 net/cnxk: not in enabled drivers build config 00:01:04.721 net/cpfl: not in enabled drivers build config 00:01:04.721 net/cxgbe: not in enabled drivers build config 00:01:04.721 net/dpaa: not in enabled drivers build config 00:01:04.721 net/dpaa2: not in enabled drivers build config 00:01:04.721 net/e1000: not in enabled drivers build config 00:01:04.721 net/ena: not in enabled drivers build config 00:01:04.721 net/enetc: not in enabled drivers build config 00:01:04.721 net/enetfec: not in enabled drivers build config 00:01:04.721 net/enic: not in enabled drivers build config 00:01:04.721 net/failsafe: not in enabled drivers build config 00:01:04.721 net/fm10k: not in enabled drivers build config 00:01:04.721 net/gve: not in enabled drivers build config 00:01:04.721 net/hinic: not in enabled drivers build config 00:01:04.721 net/hns3: not in enabled drivers build config 00:01:04.721 net/i40e: not in enabled drivers build config 00:01:04.721 net/iavf: not in enabled drivers build config 00:01:04.721 net/ice: not in enabled drivers build config 00:01:04.721 net/idpf: not in enabled drivers build config 00:01:04.721 net/igc: not in enabled drivers build config 00:01:04.721 net/ionic: not in enabled drivers build config 00:01:04.721 net/ipn3ke: not in enabled drivers build config 00:01:04.721 net/ixgbe: not in enabled drivers build config 00:01:04.721 net/mana: not in enabled drivers build config 00:01:04.721 net/memif: not in enabled drivers build config 00:01:04.721 net/mlx4: not in enabled drivers build config 00:01:04.721 net/mlx5: not in enabled drivers build config 00:01:04.721 net/mvneta: not in enabled drivers build config 00:01:04.721 net/mvpp2: not in enabled drivers build config 00:01:04.721 net/netvsc: not in enabled drivers build config 00:01:04.721 net/nfb: not in enabled drivers build config 00:01:04.721 net/nfp: not in enabled drivers build config 00:01:04.721 net/ngbe: not in enabled drivers build config 00:01:04.721 net/null: not in enabled drivers build config 00:01:04.721 net/octeontx: not in enabled drivers build config 00:01:04.721 net/octeon_ep: not in enabled drivers build config 00:01:04.721 
net/pcap: not in enabled drivers build config 00:01:04.721 net/pfe: not in enabled drivers build config 00:01:04.721 net/qede: not in enabled drivers build config 00:01:04.721 net/ring: not in enabled drivers build config 00:01:04.721 net/sfc: not in enabled drivers build config 00:01:04.721 net/softnic: not in enabled drivers build config 00:01:04.721 net/tap: not in enabled drivers build config 00:01:04.721 net/thunderx: not in enabled drivers build config 00:01:04.721 net/txgbe: not in enabled drivers build config 00:01:04.721 net/vdev_netvsc: not in enabled drivers build config 00:01:04.721 net/vhost: not in enabled drivers build config 00:01:04.721 net/virtio: not in enabled drivers build config 00:01:04.721 net/vmxnet3: not in enabled drivers build config 00:01:04.721 raw/*: missing internal dependency, "rawdev" 00:01:04.721 crypto/armv8: not in enabled drivers build config 00:01:04.721 crypto/bcmfs: not in enabled drivers build config 00:01:04.721 crypto/caam_jr: not in enabled drivers build config 00:01:04.721 crypto/ccp: not in enabled drivers build config 00:01:04.721 crypto/cnxk: not in enabled drivers build config 00:01:04.721 crypto/dpaa_sec: not in enabled drivers build config 00:01:04.721 crypto/dpaa2_sec: not in enabled drivers build config 00:01:04.721 crypto/ipsec_mb: not in enabled drivers build config 00:01:04.721 crypto/mlx5: not in enabled drivers build config 00:01:04.721 crypto/mvsam: not in enabled drivers build config 00:01:04.721 crypto/nitrox: not in enabled drivers build config 00:01:04.721 crypto/null: not in enabled drivers build config 00:01:04.721 crypto/octeontx: not in enabled drivers build config 00:01:04.721 crypto/openssl: not in enabled drivers build config 00:01:04.721 crypto/scheduler: not in enabled drivers build config 00:01:04.721 crypto/uadk: not in enabled drivers build config 00:01:04.721 crypto/virtio: not in enabled drivers build config 00:01:04.721 compress/isal: not in enabled drivers build config 00:01:04.721 compress/mlx5: not in enabled drivers build config 00:01:04.721 compress/nitrox: not in enabled drivers build config 00:01:04.721 compress/octeontx: not in enabled drivers build config 00:01:04.721 compress/zlib: not in enabled drivers build config 00:01:04.721 regex/*: missing internal dependency, "regexdev" 00:01:04.721 ml/*: missing internal dependency, "mldev" 00:01:04.721 vdpa/ifc: not in enabled drivers build config 00:01:04.721 vdpa/mlx5: not in enabled drivers build config 00:01:04.721 vdpa/nfp: not in enabled drivers build config 00:01:04.722 vdpa/sfc: not in enabled drivers build config 00:01:04.722 event/*: missing internal dependency, "eventdev" 00:01:04.722 baseband/*: missing internal dependency, "bbdev" 00:01:04.722 gpu/*: missing internal dependency, "gpudev" 00:01:04.722 00:01:04.722 00:01:04.722 Build targets in project: 85 00:01:04.722 00:01:04.722 DPDK 24.03.0 00:01:04.722 00:01:04.722 User defined options 00:01:04.722 buildtype : debug 00:01:04.722 default_library : shared 00:01:04.722 libdir : lib 00:01:04.722 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:04.722 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:04.722 c_link_args : 00:01:04.722 cpu_instruction_set: native 00:01:04.722 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:04.722 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:04.722 enable_docs : false 00:01:04.722 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:04.722 enable_kmods : false 00:01:04.722 max_lcores : 128 00:01:04.722 tests : false 00:01:04.722 00:01:04.722 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:04.722 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:04.722 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.722 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.722 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:04.983 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.983 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.983 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.983 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.983 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.983 [9/268] Linking static target lib/librte_kvargs.a 00:01:04.983 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.983 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.983 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:04.983 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.983 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:04.983 [15/268] Linking static target lib/librte_log.a 00:01:04.983 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:05.563 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.830 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:05.830 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:05.830 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:05.830 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:05.830 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:05.830 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:05.830 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:05.830 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:05.830 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:05.830 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:05.830 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:05.830 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:05.830 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:05.830 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:05.830 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:05.830 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:05.830 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:05.830 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:05.830 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:05.830 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:05.830 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:05.830 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:05.830 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:05.830 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:05.830 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:05.830 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:05.830 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:05.830 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:05.830 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:05.830 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:05.830 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:05.830 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:05.830 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:05.830 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:05.830 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:05.830 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:05.830 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:05.830 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:05.830 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:05.830 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:05.830 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:06.122 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:06.122 [60/268] Linking static target lib/librte_telemetry.a 00:01:06.122 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:06.122 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:06.122 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:06.122 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.122 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.402 [66/268] Linking target lib/librte_log.so.24.1 00:01:06.402 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.402 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.402 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.402 [70/268] Linking static target lib/librte_pci.a 
00:01:06.402 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.668 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.668 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.668 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.668 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.668 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.668 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.668 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:06.668 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:06.668 [80/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:06.668 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:06.668 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:06.668 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:06.668 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.668 [85/268] Linking static target lib/librte_ring.a 00:01:06.668 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.668 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:06.668 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:06.668 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:06.668 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.668 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.668 [92/268] Linking static target lib/librte_meter.a 00:01:06.668 [93/268] Linking target lib/librte_kvargs.so.24.1 00:01:06.929 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:06.929 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:06.929 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:06.929 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:06.929 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:06.929 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.929 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.929 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.929 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.929 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:06.929 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:06.929 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:06.929 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:06.929 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.929 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.929 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:06.929 [110/268] Linking static target lib/librte_eal.a 00:01:06.929 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.929 [112/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:06.929 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.929 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.929 [115/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.929 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:06.929 [117/268] Linking static target lib/librte_mempool.a 00:01:06.929 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.929 [119/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.929 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:06.929 [121/268] Linking static target lib/librte_rcu.a 00:01:07.190 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.190 [123/268] Linking target lib/librte_telemetry.so.24.1 00:01:07.190 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.190 [125/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:07.190 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.190 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.190 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.190 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:07.190 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.190 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.454 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:07.454 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.454 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.454 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.454 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.454 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.454 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.454 [139/268] Linking static target lib/librte_net.a 00:01:07.454 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.454 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.716 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.716 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.716 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.716 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:07.716 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.716 [147/268] Linking static target lib/librte_cmdline.a 00:01:07.716 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.716 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.716 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.716 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.716 [152/268] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.716 [153/268] Linking static target lib/librte_timer.a 00:01:07.978 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:07.978 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.978 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.978 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.978 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.978 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.978 [160/268] Linking static target lib/librte_dmadev.a 00:01:07.978 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:07.978 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:07.978 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:07.978 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:07.978 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.978 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.236 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.236 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.236 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.236 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.236 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.236 [172/268] Linking static target lib/librte_power.a 00:01:08.236 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.236 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.236 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.236 [176/268] Linking static target lib/librte_compressdev.a 00:01:08.236 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.494 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.494 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.494 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.494 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.494 [182/268] Linking static target lib/librte_hash.a 00:01:08.494 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.494 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.494 [185/268] Linking static target lib/librte_reorder.a 00:01:08.494 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.494 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.494 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.494 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.494 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.494 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.494 [192/268] Linking 
static target lib/librte_mbuf.a 00:01:08.494 [193/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.494 [194/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.494 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.494 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.494 [197/268] Linking static target lib/librte_security.a 00:01:08.753 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.753 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.753 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.753 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.753 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.753 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.753 [204/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.753 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.753 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.753 [207/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.753 [208/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.753 [209/268] Linking static target drivers/librte_bus_vdev.a 00:01:08.753 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.753 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.753 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:08.753 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.753 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.753 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.753 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:09.011 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:09.011 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.011 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.011 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:09.012 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.012 [222/268] Linking static target lib/librte_ethdev.a 00:01:09.012 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.012 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:09.012 [225/268] Linking static target lib/librte_cryptodev.a 00:01:09.270 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.203 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.578 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.477 [229/268] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:13.477 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.477 [231/268] Linking target lib/librte_eal.so.24.1 00:01:13.477 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:13.477 [233/268] Linking target lib/librte_meter.so.24.1 00:01:13.477 [234/268] Linking target lib/librte_pci.so.24.1 00:01:13.477 [235/268] Linking target lib/librte_ring.so.24.1 00:01:13.477 [236/268] Linking target lib/librte_timer.so.24.1 00:01:13.477 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:13.477 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:13.477 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:13.477 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:13.477 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:13.477 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:13.477 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:13.477 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:13.477 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:13.477 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:13.735 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:13.735 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:13.735 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:13.735 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:13.994 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:13.994 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:13.994 [253/268] Linking target lib/librte_net.so.24.1 00:01:13.994 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:13.994 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:13.994 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:13.994 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:13.994 [258/268] Linking target lib/librte_hash.so.24.1 00:01:13.994 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:13.994 [260/268] Linking target lib/librte_security.so.24.1 00:01:13.994 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:14.252 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:14.252 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:14.252 [264/268] Linking target lib/librte_power.so.24.1 00:01:17.538 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.538 [266/268] Linking static target lib/librte_vhost.a 00:01:18.471 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.471 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:18.471 INFO: autodetecting backend as ninja 00:01:18.471 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:19.436 CC lib/ut_mock/mock.o 00:01:19.436 CC lib/ut/ut.o 00:01:19.436 CC lib/log/log.o 00:01:19.436 CC lib/log/log_flags.o 00:01:19.436 CC lib/log/log_deprecated.o 00:01:19.694 LIB 
libspdk_log.a 00:01:19.694 LIB libspdk_ut.a 00:01:19.694 LIB libspdk_ut_mock.a 00:01:19.694 SO libspdk_log.so.7.0 00:01:19.694 SO libspdk_ut.so.2.0 00:01:19.694 SO libspdk_ut_mock.so.6.0 00:01:19.694 SYMLINK libspdk_ut.so 00:01:19.694 SYMLINK libspdk_ut_mock.so 00:01:19.694 SYMLINK libspdk_log.so 00:01:19.952 CC lib/ioat/ioat.o 00:01:19.952 CC lib/dma/dma.o 00:01:19.952 CXX lib/trace_parser/trace.o 00:01:19.952 CC lib/util/base64.o 00:01:19.952 CC lib/util/bit_array.o 00:01:19.952 CC lib/util/cpuset.o 00:01:19.952 CC lib/util/crc16.o 00:01:19.952 CC lib/util/crc32.o 00:01:19.952 CC lib/util/crc32c.o 00:01:19.952 CC lib/util/crc32_ieee.o 00:01:19.952 CC lib/util/crc64.o 00:01:19.953 CC lib/util/dif.o 00:01:19.953 CC lib/util/fd.o 00:01:19.953 CC lib/util/file.o 00:01:19.953 CC lib/util/hexlify.o 00:01:19.953 CC lib/util/iov.o 00:01:19.953 CC lib/util/math.o 00:01:19.953 CC lib/util/pipe.o 00:01:19.953 CC lib/util/strerror_tls.o 00:01:19.953 CC lib/util/string.o 00:01:19.953 CC lib/util/uuid.o 00:01:19.953 CC lib/util/fd_group.o 00:01:19.953 CC lib/util/xor.o 00:01:19.953 CC lib/util/zipf.o 00:01:19.953 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.953 CC lib/vfio_user/host/vfio_user.o 00:01:20.211 LIB libspdk_dma.a 00:01:20.211 SO libspdk_dma.so.4.0 00:01:20.211 SYMLINK libspdk_dma.so 00:01:20.211 LIB libspdk_ioat.a 00:01:20.211 SO libspdk_ioat.so.7.0 00:01:20.211 LIB libspdk_vfio_user.a 00:01:20.211 SYMLINK libspdk_ioat.so 00:01:20.211 SO libspdk_vfio_user.so.5.0 00:01:20.211 SYMLINK libspdk_vfio_user.so 00:01:20.469 LIB libspdk_util.a 00:01:20.469 SO libspdk_util.so.9.1 00:01:20.728 SYMLINK libspdk_util.so 00:01:20.728 CC lib/rdma_provider/common.o 00:01:20.728 CC lib/rdma_utils/rdma_utils.o 00:01:20.728 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:20.728 CC lib/vmd/vmd.o 00:01:20.728 CC lib/vmd/led.o 00:01:20.728 CC lib/idxd/idxd.o 00:01:20.728 CC lib/env_dpdk/env.o 00:01:20.728 CC lib/json/json_parse.o 00:01:20.728 CC lib/conf/conf.o 00:01:20.728 CC lib/idxd/idxd_user.o 00:01:20.728 CC lib/json/json_util.o 00:01:20.728 CC lib/env_dpdk/memory.o 00:01:20.728 CC lib/json/json_write.o 00:01:20.728 CC lib/env_dpdk/pci.o 00:01:20.728 CC lib/idxd/idxd_kernel.o 00:01:20.728 CC lib/env_dpdk/init.o 00:01:20.728 CC lib/env_dpdk/threads.o 00:01:20.728 CC lib/env_dpdk/pci_ioat.o 00:01:20.728 CC lib/env_dpdk/pci_virtio.o 00:01:20.728 CC lib/env_dpdk/pci_vmd.o 00:01:20.728 CC lib/env_dpdk/pci_idxd.o 00:01:20.728 CC lib/env_dpdk/pci_event.o 00:01:20.728 CC lib/env_dpdk/sigbus_handler.o 00:01:20.728 CC lib/env_dpdk/pci_dpdk.o 00:01:20.728 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.728 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.728 LIB libspdk_trace_parser.a 00:01:20.999 SO libspdk_trace_parser.so.5.0 00:01:20.999 SYMLINK libspdk_trace_parser.so 00:01:20.999 LIB libspdk_conf.a 00:01:20.999 SO libspdk_conf.so.6.0 00:01:20.999 LIB libspdk_rdma_utils.a 00:01:20.999 LIB libspdk_rdma_provider.a 00:01:20.999 SO libspdk_rdma_utils.so.1.0 00:01:21.280 LIB libspdk_json.a 00:01:21.280 SO libspdk_rdma_provider.so.6.0 00:01:21.280 SYMLINK libspdk_conf.so 00:01:21.280 SO libspdk_json.so.6.0 00:01:21.280 SYMLINK libspdk_rdma_utils.so 00:01:21.280 SYMLINK libspdk_rdma_provider.so 00:01:21.280 SYMLINK libspdk_json.so 00:01:21.280 CC lib/jsonrpc/jsonrpc_server.o 00:01:21.280 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:21.280 CC lib/jsonrpc/jsonrpc_client.o 00:01:21.280 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:21.280 LIB libspdk_idxd.a 00:01:21.280 SO libspdk_idxd.so.12.0 00:01:21.539 SYMLINK libspdk_idxd.so 00:01:21.539 
LIB libspdk_vmd.a 00:01:21.539 SO libspdk_vmd.so.6.0 00:01:21.539 SYMLINK libspdk_vmd.so 00:01:21.539 LIB libspdk_jsonrpc.a 00:01:21.797 SO libspdk_jsonrpc.so.6.0 00:01:21.797 SYMLINK libspdk_jsonrpc.so 00:01:21.797 CC lib/rpc/rpc.o 00:01:22.054 LIB libspdk_rpc.a 00:01:22.054 SO libspdk_rpc.so.6.0 00:01:22.312 SYMLINK libspdk_rpc.so 00:01:22.312 CC lib/trace/trace.o 00:01:22.312 CC lib/keyring/keyring.o 00:01:22.312 CC lib/notify/notify.o 00:01:22.312 CC lib/trace/trace_flags.o 00:01:22.312 CC lib/keyring/keyring_rpc.o 00:01:22.312 CC lib/notify/notify_rpc.o 00:01:22.312 CC lib/trace/trace_rpc.o 00:01:22.570 LIB libspdk_notify.a 00:01:22.570 SO libspdk_notify.so.6.0 00:01:22.570 LIB libspdk_keyring.a 00:01:22.570 SYMLINK libspdk_notify.so 00:01:22.570 LIB libspdk_trace.a 00:01:22.570 SO libspdk_keyring.so.1.0 00:01:22.570 SO libspdk_trace.so.10.0 00:01:22.570 SYMLINK libspdk_keyring.so 00:01:22.829 SYMLINK libspdk_trace.so 00:01:22.829 LIB libspdk_env_dpdk.a 00:01:22.829 CC lib/sock/sock.o 00:01:22.829 CC lib/sock/sock_rpc.o 00:01:22.829 CC lib/thread/thread.o 00:01:22.829 CC lib/thread/iobuf.o 00:01:22.829 SO libspdk_env_dpdk.so.14.1 00:01:23.087 SYMLINK libspdk_env_dpdk.so 00:01:23.345 LIB libspdk_sock.a 00:01:23.345 SO libspdk_sock.so.10.0 00:01:23.345 SYMLINK libspdk_sock.so 00:01:23.604 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:23.604 CC lib/nvme/nvme_ctrlr.o 00:01:23.604 CC lib/nvme/nvme_fabric.o 00:01:23.604 CC lib/nvme/nvme_ns_cmd.o 00:01:23.604 CC lib/nvme/nvme_ns.o 00:01:23.604 CC lib/nvme/nvme_pcie_common.o 00:01:23.604 CC lib/nvme/nvme_pcie.o 00:01:23.604 CC lib/nvme/nvme_qpair.o 00:01:23.604 CC lib/nvme/nvme.o 00:01:23.604 CC lib/nvme/nvme_quirks.o 00:01:23.604 CC lib/nvme/nvme_transport.o 00:01:23.604 CC lib/nvme/nvme_discovery.o 00:01:23.604 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:23.604 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:23.604 CC lib/nvme/nvme_tcp.o 00:01:23.604 CC lib/nvme/nvme_opal.o 00:01:23.604 CC lib/nvme/nvme_io_msg.o 00:01:23.604 CC lib/nvme/nvme_poll_group.o 00:01:23.604 CC lib/nvme/nvme_zns.o 00:01:23.604 CC lib/nvme/nvme_stubs.o 00:01:23.604 CC lib/nvme/nvme_auth.o 00:01:23.604 CC lib/nvme/nvme_cuse.o 00:01:23.604 CC lib/nvme/nvme_vfio_user.o 00:01:23.604 CC lib/nvme/nvme_rdma.o 00:01:24.538 LIB libspdk_thread.a 00:01:24.538 SO libspdk_thread.so.10.1 00:01:24.538 SYMLINK libspdk_thread.so 00:01:24.795 CC lib/virtio/virtio.o 00:01:24.795 CC lib/init/json_config.o 00:01:24.795 CC lib/accel/accel.o 00:01:24.795 CC lib/virtio/virtio_vhost_user.o 00:01:24.795 CC lib/accel/accel_rpc.o 00:01:24.795 CC lib/init/subsystem.o 00:01:24.795 CC lib/virtio/virtio_vfio_user.o 00:01:24.795 CC lib/init/subsystem_rpc.o 00:01:24.795 CC lib/accel/accel_sw.o 00:01:24.795 CC lib/virtio/virtio_pci.o 00:01:24.795 CC lib/init/rpc.o 00:01:24.795 CC lib/blob/blobstore.o 00:01:24.795 CC lib/vfu_tgt/tgt_endpoint.o 00:01:24.795 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.795 CC lib/blob/request.o 00:01:24.795 CC lib/blob/zeroes.o 00:01:24.795 CC lib/blob/blob_bs_dev.o 00:01:25.051 LIB libspdk_init.a 00:01:25.051 SO libspdk_init.so.5.0 00:01:25.051 LIB libspdk_virtio.a 00:01:25.051 LIB libspdk_vfu_tgt.a 00:01:25.051 SYMLINK libspdk_init.so 00:01:25.051 SO libspdk_vfu_tgt.so.3.0 00:01:25.051 SO libspdk_virtio.so.7.0 00:01:25.051 SYMLINK libspdk_vfu_tgt.so 00:01:25.051 SYMLINK libspdk_virtio.so 00:01:25.307 CC lib/event/app.o 00:01:25.307 CC lib/event/reactor.o 00:01:25.307 CC lib/event/log_rpc.o 00:01:25.307 CC lib/event/app_rpc.o 00:01:25.307 CC lib/event/scheduler_static.o 00:01:25.563 LIB libspdk_event.a 
00:01:25.563 SO libspdk_event.so.14.0 00:01:25.821 LIB libspdk_accel.a 00:01:25.821 SYMLINK libspdk_event.so 00:01:25.821 SO libspdk_accel.so.15.1 00:01:25.821 LIB libspdk_nvme.a 00:01:25.821 SYMLINK libspdk_accel.so 00:01:25.821 SO libspdk_nvme.so.13.1 00:01:26.078 CC lib/bdev/bdev.o 00:01:26.078 CC lib/bdev/bdev_rpc.o 00:01:26.078 CC lib/bdev/bdev_zone.o 00:01:26.078 CC lib/bdev/part.o 00:01:26.078 CC lib/bdev/scsi_nvme.o 00:01:26.336 SYMLINK libspdk_nvme.so 00:01:27.707 LIB libspdk_blob.a 00:01:27.707 SO libspdk_blob.so.11.0 00:01:27.707 SYMLINK libspdk_blob.so 00:01:27.964 CC lib/lvol/lvol.o 00:01:27.964 CC lib/blobfs/blobfs.o 00:01:27.964 CC lib/blobfs/tree.o 00:01:28.530 LIB libspdk_bdev.a 00:01:28.530 SO libspdk_bdev.so.15.1 00:01:28.530 SYMLINK libspdk_bdev.so 00:01:28.798 LIB libspdk_blobfs.a 00:01:28.798 SO libspdk_blobfs.so.10.0 00:01:28.798 SYMLINK libspdk_blobfs.so 00:01:28.798 LIB libspdk_lvol.a 00:01:28.798 CC lib/scsi/dev.o 00:01:28.798 CC lib/nvmf/ctrlr.o 00:01:28.798 CC lib/nbd/nbd.o 00:01:28.798 CC lib/ublk/ublk.o 00:01:28.798 CC lib/ftl/ftl_core.o 00:01:28.798 CC lib/nvmf/ctrlr_discovery.o 00:01:28.798 CC lib/scsi/lun.o 00:01:28.798 CC lib/nbd/nbd_rpc.o 00:01:28.798 CC lib/ublk/ublk_rpc.o 00:01:28.798 CC lib/ftl/ftl_init.o 00:01:28.798 CC lib/nvmf/ctrlr_bdev.o 00:01:28.798 CC lib/ftl/ftl_layout.o 00:01:28.798 CC lib/scsi/port.o 00:01:28.798 CC lib/nvmf/subsystem.o 00:01:28.798 CC lib/ftl/ftl_debug.o 00:01:28.798 CC lib/nvmf/nvmf.o 00:01:28.798 CC lib/scsi/scsi.o 00:01:28.798 CC lib/ftl/ftl_io.o 00:01:28.798 CC lib/scsi/scsi_bdev.o 00:01:28.798 CC lib/nvmf/nvmf_rpc.o 00:01:28.798 CC lib/ftl/ftl_sb.o 00:01:28.798 CC lib/nvmf/transport.o 00:01:28.798 CC lib/ftl/ftl_l2p.o 00:01:28.798 CC lib/scsi/scsi_pr.o 00:01:28.798 CC lib/scsi/scsi_rpc.o 00:01:28.798 CC lib/ftl/ftl_l2p_flat.o 00:01:28.798 CC lib/nvmf/tcp.o 00:01:28.798 CC lib/scsi/task.o 00:01:28.798 CC lib/ftl/ftl_nv_cache.o 00:01:28.798 CC lib/nvmf/stubs.o 00:01:28.798 CC lib/ftl/ftl_band.o 00:01:28.798 CC lib/nvmf/mdns_server.o 00:01:28.798 CC lib/ftl/ftl_band_ops.o 00:01:28.798 CC lib/nvmf/vfio_user.o 00:01:28.798 CC lib/ftl/ftl_rq.o 00:01:28.798 CC lib/ftl/ftl_writer.o 00:01:28.798 CC lib/nvmf/rdma.o 00:01:28.798 CC lib/nvmf/auth.o 00:01:28.798 CC lib/ftl/ftl_reloc.o 00:01:28.798 CC lib/ftl/ftl_l2p_cache.o 00:01:28.798 CC lib/ftl/ftl_p2l.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:28.798 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:28.798 SO libspdk_lvol.so.10.0 00:01:29.059 SYMLINK libspdk_lvol.so 00:01:29.059 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:29.323 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:29.323 CC lib/ftl/utils/ftl_conf.o 00:01:29.323 CC lib/ftl/utils/ftl_md.o 00:01:29.323 CC lib/ftl/utils/ftl_mempool.o 00:01:29.323 CC lib/ftl/utils/ftl_bitmap.o 00:01:29.323 CC lib/ftl/utils/ftl_property.o 00:01:29.323 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:29.323 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:29.323 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:29.323 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:29.323 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:29.323 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:01:29.323 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:29.323 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:29.323 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:29.583 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:29.583 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:29.583 CC lib/ftl/base/ftl_base_dev.o 00:01:29.583 CC lib/ftl/base/ftl_base_bdev.o 00:01:29.583 CC lib/ftl/ftl_trace.o 00:01:29.583 LIB libspdk_nbd.a 00:01:29.583 SO libspdk_nbd.so.7.0 00:01:29.840 SYMLINK libspdk_nbd.so 00:01:29.841 LIB libspdk_scsi.a 00:01:29.841 SO libspdk_scsi.so.9.0 00:01:29.841 LIB libspdk_ublk.a 00:01:29.841 SO libspdk_ublk.so.3.0 00:01:29.841 SYMLINK libspdk_scsi.so 00:01:29.841 SYMLINK libspdk_ublk.so 00:01:30.098 CC lib/vhost/vhost.o 00:01:30.098 CC lib/iscsi/conn.o 00:01:30.098 CC lib/iscsi/init_grp.o 00:01:30.098 CC lib/vhost/vhost_rpc.o 00:01:30.098 CC lib/vhost/vhost_scsi.o 00:01:30.098 CC lib/iscsi/iscsi.o 00:01:30.098 CC lib/vhost/vhost_blk.o 00:01:30.098 CC lib/iscsi/md5.o 00:01:30.098 CC lib/vhost/rte_vhost_user.o 00:01:30.098 CC lib/iscsi/param.o 00:01:30.098 CC lib/iscsi/portal_grp.o 00:01:30.098 CC lib/iscsi/tgt_node.o 00:01:30.098 CC lib/iscsi/iscsi_subsystem.o 00:01:30.098 CC lib/iscsi/iscsi_rpc.o 00:01:30.098 CC lib/iscsi/task.o 00:01:30.356 LIB libspdk_ftl.a 00:01:30.356 SO libspdk_ftl.so.9.0 00:01:30.921 SYMLINK libspdk_ftl.so 00:01:31.488 LIB libspdk_vhost.a 00:01:31.488 LIB libspdk_nvmf.a 00:01:31.488 SO libspdk_vhost.so.8.0 00:01:31.488 SO libspdk_nvmf.so.18.1 00:01:31.488 SYMLINK libspdk_vhost.so 00:01:31.488 LIB libspdk_iscsi.a 00:01:31.488 SO libspdk_iscsi.so.8.0 00:01:31.745 SYMLINK libspdk_nvmf.so 00:01:31.745 SYMLINK libspdk_iscsi.so 00:01:32.002 CC module/vfu_device/vfu_virtio.o 00:01:32.002 CC module/env_dpdk/env_dpdk_rpc.o 00:01:32.002 CC module/vfu_device/vfu_virtio_blk.o 00:01:32.002 CC module/vfu_device/vfu_virtio_scsi.o 00:01:32.002 CC module/vfu_device/vfu_virtio_rpc.o 00:01:32.002 CC module/accel/error/accel_error.o 00:01:32.002 CC module/blob/bdev/blob_bdev.o 00:01:32.002 CC module/keyring/file/keyring.o 00:01:32.002 CC module/keyring/linux/keyring.o 00:01:32.002 CC module/accel/dsa/accel_dsa.o 00:01:32.002 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:32.002 CC module/scheduler/gscheduler/gscheduler.o 00:01:32.002 CC module/sock/posix/posix.o 00:01:32.002 CC module/keyring/file/keyring_rpc.o 00:01:32.002 CC module/accel/error/accel_error_rpc.o 00:01:32.002 CC module/accel/ioat/accel_ioat.o 00:01:32.002 CC module/accel/dsa/accel_dsa_rpc.o 00:01:32.002 CC module/keyring/linux/keyring_rpc.o 00:01:32.002 CC module/accel/ioat/accel_ioat_rpc.o 00:01:32.002 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:32.002 CC module/accel/iaa/accel_iaa.o 00:01:32.002 CC module/accel/iaa/accel_iaa_rpc.o 00:01:32.260 LIB libspdk_env_dpdk_rpc.a 00:01:32.260 SO libspdk_env_dpdk_rpc.so.6.0 00:01:32.260 SYMLINK libspdk_env_dpdk_rpc.so 00:01:32.260 LIB libspdk_keyring_linux.a 00:01:32.260 LIB libspdk_keyring_file.a 00:01:32.260 LIB libspdk_scheduler_gscheduler.a 00:01:32.260 LIB libspdk_scheduler_dpdk_governor.a 00:01:32.260 SO libspdk_keyring_linux.so.1.0 00:01:32.260 SO libspdk_keyring_file.so.1.0 00:01:32.260 SO libspdk_scheduler_gscheduler.so.4.0 00:01:32.260 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:32.260 LIB libspdk_accel_error.a 00:01:32.260 LIB libspdk_accel_ioat.a 00:01:32.260 LIB libspdk_scheduler_dynamic.a 00:01:32.260 LIB libspdk_accel_iaa.a 00:01:32.260 SO libspdk_accel_error.so.2.0 00:01:32.260 SO libspdk_scheduler_dynamic.so.4.0 00:01:32.260 SO libspdk_accel_ioat.so.6.0 00:01:32.260 
SYMLINK libspdk_keyring_file.so 00:01:32.260 SYMLINK libspdk_keyring_linux.so 00:01:32.260 SYMLINK libspdk_scheduler_gscheduler.so 00:01:32.260 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:32.260 SO libspdk_accel_iaa.so.3.0 00:01:32.260 LIB libspdk_accel_dsa.a 00:01:32.260 SYMLINK libspdk_scheduler_dynamic.so 00:01:32.260 LIB libspdk_blob_bdev.a 00:01:32.260 SYMLINK libspdk_accel_error.so 00:01:32.260 SYMLINK libspdk_accel_ioat.so 00:01:32.518 SYMLINK libspdk_accel_iaa.so 00:01:32.518 SO libspdk_accel_dsa.so.5.0 00:01:32.518 SO libspdk_blob_bdev.so.11.0 00:01:32.518 SYMLINK libspdk_blob_bdev.so 00:01:32.518 SYMLINK libspdk_accel_dsa.so 00:01:32.778 LIB libspdk_vfu_device.a 00:01:32.778 SO libspdk_vfu_device.so.3.0 00:01:32.778 CC module/bdev/delay/vbdev_delay.o 00:01:32.778 CC module/bdev/null/bdev_null.o 00:01:32.778 CC module/bdev/gpt/gpt.o 00:01:32.778 CC module/bdev/error/vbdev_error.o 00:01:32.778 CC module/bdev/split/vbdev_split.o 00:01:32.778 CC module/blobfs/bdev/blobfs_bdev.o 00:01:32.778 CC module/bdev/null/bdev_null_rpc.o 00:01:32.778 CC module/bdev/error/vbdev_error_rpc.o 00:01:32.778 CC module/bdev/lvol/vbdev_lvol.o 00:01:32.778 CC module/bdev/nvme/bdev_nvme.o 00:01:32.778 CC module/bdev/gpt/vbdev_gpt.o 00:01:32.778 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:32.778 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:32.778 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:32.778 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:32.778 CC module/bdev/split/vbdev_split_rpc.o 00:01:32.778 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:32.778 CC module/bdev/nvme/nvme_rpc.o 00:01:32.778 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:32.778 CC module/bdev/raid/bdev_raid.o 00:01:32.778 CC module/bdev/nvme/bdev_mdns_client.o 00:01:32.778 CC module/bdev/malloc/bdev_malloc.o 00:01:32.778 CC module/bdev/passthru/vbdev_passthru.o 00:01:32.778 CC module/bdev/nvme/vbdev_opal.o 00:01:32.778 CC module/bdev/raid/bdev_raid_rpc.o 00:01:32.778 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:32.778 CC module/bdev/aio/bdev_aio.o 00:01:32.778 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:32.778 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:32.778 CC module/bdev/raid/bdev_raid_sb.o 00:01:32.778 CC module/bdev/aio/bdev_aio_rpc.o 00:01:32.778 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:32.778 CC module/bdev/raid/raid0.o 00:01:32.778 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:32.778 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:32.778 CC module/bdev/raid/raid1.o 00:01:32.778 CC module/bdev/ftl/bdev_ftl.o 00:01:32.778 CC module/bdev/raid/concat.o 00:01:32.778 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:32.778 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:32.778 CC module/bdev/iscsi/bdev_iscsi.o 00:01:32.778 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:32.778 SYMLINK libspdk_vfu_device.so 00:01:33.119 LIB libspdk_sock_posix.a 00:01:33.119 SO libspdk_sock_posix.so.6.0 00:01:33.119 LIB libspdk_blobfs_bdev.a 00:01:33.119 LIB libspdk_bdev_gpt.a 00:01:33.119 LIB libspdk_bdev_error.a 00:01:33.119 SO libspdk_blobfs_bdev.so.6.0 00:01:33.119 LIB libspdk_bdev_ftl.a 00:01:33.119 SO libspdk_bdev_gpt.so.6.0 00:01:33.119 SO libspdk_bdev_error.so.6.0 00:01:33.119 LIB libspdk_bdev_null.a 00:01:33.119 LIB libspdk_bdev_split.a 00:01:33.119 SO libspdk_bdev_ftl.so.6.0 00:01:33.119 SYMLINK libspdk_blobfs_bdev.so 00:01:33.119 SO libspdk_bdev_split.so.6.0 00:01:33.119 SO libspdk_bdev_null.so.6.0 00:01:33.119 SYMLINK libspdk_sock_posix.so 00:01:33.119 SYMLINK libspdk_bdev_gpt.so 00:01:33.119 SYMLINK libspdk_bdev_error.so 
00:01:33.119 LIB libspdk_bdev_passthru.a 00:01:33.119 SYMLINK libspdk_bdev_ftl.so 00:01:33.119 SYMLINK libspdk_bdev_null.so 00:01:33.119 SYMLINK libspdk_bdev_split.so 00:01:33.119 SO libspdk_bdev_passthru.so.6.0 00:01:33.119 LIB libspdk_bdev_zone_block.a 00:01:33.378 SO libspdk_bdev_zone_block.so.6.0 00:01:33.378 LIB libspdk_bdev_iscsi.a 00:01:33.378 LIB libspdk_bdev_delay.a 00:01:33.378 SYMLINK libspdk_bdev_passthru.so 00:01:33.378 LIB libspdk_bdev_malloc.a 00:01:33.378 SO libspdk_bdev_iscsi.so.6.0 00:01:33.378 SO libspdk_bdev_delay.so.6.0 00:01:33.378 LIB libspdk_bdev_aio.a 00:01:33.378 SO libspdk_bdev_malloc.so.6.0 00:01:33.378 SYMLINK libspdk_bdev_zone_block.so 00:01:33.378 SO libspdk_bdev_aio.so.6.0 00:01:33.378 SYMLINK libspdk_bdev_iscsi.so 00:01:33.378 SYMLINK libspdk_bdev_delay.so 00:01:33.378 SYMLINK libspdk_bdev_malloc.so 00:01:33.378 SYMLINK libspdk_bdev_aio.so 00:01:33.378 LIB libspdk_bdev_virtio.a 00:01:33.378 SO libspdk_bdev_virtio.so.6.0 00:01:33.378 LIB libspdk_bdev_lvol.a 00:01:33.636 SO libspdk_bdev_lvol.so.6.0 00:01:33.636 SYMLINK libspdk_bdev_virtio.so 00:01:33.636 SYMLINK libspdk_bdev_lvol.so 00:01:33.894 LIB libspdk_bdev_raid.a 00:01:33.894 SO libspdk_bdev_raid.so.6.0 00:01:33.894 SYMLINK libspdk_bdev_raid.so 00:01:35.269 LIB libspdk_bdev_nvme.a 00:01:35.269 SO libspdk_bdev_nvme.so.7.0 00:01:35.269 SYMLINK libspdk_bdev_nvme.so 00:01:35.527 CC module/event/subsystems/iobuf/iobuf.o 00:01:35.527 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:35.527 CC module/event/subsystems/sock/sock.o 00:01:35.527 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:35.527 CC module/event/subsystems/keyring/keyring.o 00:01:35.527 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:35.527 CC module/event/subsystems/scheduler/scheduler.o 00:01:35.527 CC module/event/subsystems/vmd/vmd.o 00:01:35.527 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:35.786 LIB libspdk_event_keyring.a 00:01:35.786 LIB libspdk_event_vhost_blk.a 00:01:35.786 LIB libspdk_event_sock.a 00:01:35.786 LIB libspdk_event_scheduler.a 00:01:35.786 LIB libspdk_event_vfu_tgt.a 00:01:35.786 LIB libspdk_event_vmd.a 00:01:35.786 LIB libspdk_event_iobuf.a 00:01:35.786 SO libspdk_event_keyring.so.1.0 00:01:35.786 SO libspdk_event_vhost_blk.so.3.0 00:01:35.786 SO libspdk_event_sock.so.5.0 00:01:35.786 SO libspdk_event_scheduler.so.4.0 00:01:35.786 SO libspdk_event_vfu_tgt.so.3.0 00:01:35.786 SO libspdk_event_vmd.so.6.0 00:01:35.786 SO libspdk_event_iobuf.so.3.0 00:01:35.786 SYMLINK libspdk_event_keyring.so 00:01:35.786 SYMLINK libspdk_event_vhost_blk.so 00:01:35.786 SYMLINK libspdk_event_sock.so 00:01:35.786 SYMLINK libspdk_event_scheduler.so 00:01:35.786 SYMLINK libspdk_event_vfu_tgt.so 00:01:35.786 SYMLINK libspdk_event_vmd.so 00:01:35.786 SYMLINK libspdk_event_iobuf.so 00:01:36.044 CC module/event/subsystems/accel/accel.o 00:01:36.044 LIB libspdk_event_accel.a 00:01:36.300 SO libspdk_event_accel.so.6.0 00:01:36.300 SYMLINK libspdk_event_accel.so 00:01:36.300 CC module/event/subsystems/bdev/bdev.o 00:01:36.558 LIB libspdk_event_bdev.a 00:01:36.558 SO libspdk_event_bdev.so.6.0 00:01:36.558 SYMLINK libspdk_event_bdev.so 00:01:36.816 CC module/event/subsystems/scsi/scsi.o 00:01:36.816 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:36.816 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:36.816 CC module/event/subsystems/ublk/ublk.o 00:01:36.816 CC module/event/subsystems/nbd/nbd.o 00:01:37.074 LIB libspdk_event_nbd.a 00:01:37.074 LIB libspdk_event_ublk.a 00:01:37.074 LIB libspdk_event_scsi.a 00:01:37.074 SO 
libspdk_event_nbd.so.6.0 00:01:37.074 SO libspdk_event_ublk.so.3.0 00:01:37.074 SO libspdk_event_scsi.so.6.0 00:01:37.074 SYMLINK libspdk_event_ublk.so 00:01:37.074 SYMLINK libspdk_event_nbd.so 00:01:37.074 LIB libspdk_event_nvmf.a 00:01:37.074 SYMLINK libspdk_event_scsi.so 00:01:37.074 SO libspdk_event_nvmf.so.6.0 00:01:37.074 SYMLINK libspdk_event_nvmf.so 00:01:37.333 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:37.333 CC module/event/subsystems/iscsi/iscsi.o 00:01:37.333 LIB libspdk_event_vhost_scsi.a 00:01:37.333 LIB libspdk_event_iscsi.a 00:01:37.333 SO libspdk_event_vhost_scsi.so.3.0 00:01:37.333 SO libspdk_event_iscsi.so.6.0 00:01:37.333 SYMLINK libspdk_event_vhost_scsi.so 00:01:37.591 SYMLINK libspdk_event_iscsi.so 00:01:37.591 SO libspdk.so.6.0 00:01:37.591 SYMLINK libspdk.so 00:01:37.856 CXX app/trace/trace.o 00:01:37.856 CC app/spdk_top/spdk_top.o 00:01:37.856 CC app/trace_record/trace_record.o 00:01:37.856 CC app/spdk_lspci/spdk_lspci.o 00:01:37.856 CC app/spdk_nvme_identify/identify.o 00:01:37.856 CC app/spdk_nvme_perf/perf.o 00:01:37.856 CC app/spdk_nvme_discover/discovery_aer.o 00:01:37.856 CC test/rpc_client/rpc_client_test.o 00:01:37.856 TEST_HEADER include/spdk/accel.h 00:01:37.856 TEST_HEADER include/spdk/accel_module.h 00:01:37.856 TEST_HEADER include/spdk/assert.h 00:01:37.856 TEST_HEADER include/spdk/barrier.h 00:01:37.856 TEST_HEADER include/spdk/base64.h 00:01:37.856 TEST_HEADER include/spdk/bdev.h 00:01:37.856 TEST_HEADER include/spdk/bdev_module.h 00:01:37.856 TEST_HEADER include/spdk/bdev_zone.h 00:01:37.856 TEST_HEADER include/spdk/bit_array.h 00:01:37.856 TEST_HEADER include/spdk/bit_pool.h 00:01:37.856 TEST_HEADER include/spdk/blob_bdev.h 00:01:37.856 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:37.856 TEST_HEADER include/spdk/blobfs.h 00:01:37.856 TEST_HEADER include/spdk/blob.h 00:01:37.856 TEST_HEADER include/spdk/conf.h 00:01:37.856 TEST_HEADER include/spdk/config.h 00:01:37.856 TEST_HEADER include/spdk/cpuset.h 00:01:37.856 TEST_HEADER include/spdk/crc16.h 00:01:37.856 TEST_HEADER include/spdk/crc32.h 00:01:37.856 TEST_HEADER include/spdk/crc64.h 00:01:37.856 TEST_HEADER include/spdk/dif.h 00:01:37.856 TEST_HEADER include/spdk/dma.h 00:01:37.856 TEST_HEADER include/spdk/endian.h 00:01:37.856 TEST_HEADER include/spdk/env_dpdk.h 00:01:37.856 TEST_HEADER include/spdk/env.h 00:01:37.856 TEST_HEADER include/spdk/event.h 00:01:37.856 TEST_HEADER include/spdk/fd_group.h 00:01:37.856 TEST_HEADER include/spdk/fd.h 00:01:37.856 TEST_HEADER include/spdk/file.h 00:01:37.856 TEST_HEADER include/spdk/ftl.h 00:01:37.856 TEST_HEADER include/spdk/gpt_spec.h 00:01:37.856 TEST_HEADER include/spdk/hexlify.h 00:01:37.856 TEST_HEADER include/spdk/histogram_data.h 00:01:37.856 TEST_HEADER include/spdk/idxd.h 00:01:37.856 TEST_HEADER include/spdk/init.h 00:01:37.856 TEST_HEADER include/spdk/ioat_spec.h 00:01:37.856 TEST_HEADER include/spdk/idxd_spec.h 00:01:37.856 TEST_HEADER include/spdk/ioat.h 00:01:37.856 TEST_HEADER include/spdk/json.h 00:01:37.856 TEST_HEADER include/spdk/iscsi_spec.h 00:01:37.856 TEST_HEADER include/spdk/keyring.h 00:01:37.856 TEST_HEADER include/spdk/jsonrpc.h 00:01:37.856 TEST_HEADER include/spdk/keyring_module.h 00:01:37.856 TEST_HEADER include/spdk/likely.h 00:01:37.856 TEST_HEADER include/spdk/log.h 00:01:37.856 TEST_HEADER include/spdk/lvol.h 00:01:37.856 TEST_HEADER include/spdk/memory.h 00:01:37.856 TEST_HEADER include/spdk/mmio.h 00:01:37.856 TEST_HEADER include/spdk/nbd.h 00:01:37.856 TEST_HEADER include/spdk/notify.h 00:01:37.856 
TEST_HEADER include/spdk/nvme.h 00:01:37.856 TEST_HEADER include/spdk/nvme_intel.h 00:01:37.856 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:37.856 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:37.856 TEST_HEADER include/spdk/nvme_spec.h 00:01:37.856 TEST_HEADER include/spdk/nvme_zns.h 00:01:37.856 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:37.856 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:37.856 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:37.856 TEST_HEADER include/spdk/nvmf.h 00:01:37.856 TEST_HEADER include/spdk/nvmf_transport.h 00:01:37.856 TEST_HEADER include/spdk/nvmf_spec.h 00:01:37.856 TEST_HEADER include/spdk/opal.h 00:01:37.856 TEST_HEADER include/spdk/opal_spec.h 00:01:37.856 TEST_HEADER include/spdk/pci_ids.h 00:01:37.856 TEST_HEADER include/spdk/pipe.h 00:01:37.856 TEST_HEADER include/spdk/queue.h 00:01:37.856 TEST_HEADER include/spdk/rpc.h 00:01:37.856 TEST_HEADER include/spdk/reduce.h 00:01:37.856 TEST_HEADER include/spdk/scheduler.h 00:01:37.856 TEST_HEADER include/spdk/scsi.h 00:01:37.856 TEST_HEADER include/spdk/scsi_spec.h 00:01:37.856 TEST_HEADER include/spdk/sock.h 00:01:37.856 TEST_HEADER include/spdk/stdinc.h 00:01:37.856 TEST_HEADER include/spdk/string.h 00:01:37.856 TEST_HEADER include/spdk/thread.h 00:01:37.856 TEST_HEADER include/spdk/trace_parser.h 00:01:37.856 TEST_HEADER include/spdk/trace.h 00:01:37.856 TEST_HEADER include/spdk/tree.h 00:01:37.856 TEST_HEADER include/spdk/ublk.h 00:01:37.856 TEST_HEADER include/spdk/util.h 00:01:37.856 CC app/spdk_dd/spdk_dd.o 00:01:37.856 TEST_HEADER include/spdk/uuid.h 00:01:37.856 TEST_HEADER include/spdk/version.h 00:01:37.856 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:37.856 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:37.856 TEST_HEADER include/spdk/vhost.h 00:01:37.856 TEST_HEADER include/spdk/vmd.h 00:01:37.856 TEST_HEADER include/spdk/xor.h 00:01:37.856 TEST_HEADER include/spdk/zipf.h 00:01:37.856 CXX test/cpp_headers/accel.o 00:01:37.856 CXX test/cpp_headers/assert.o 00:01:37.856 CXX test/cpp_headers/accel_module.o 00:01:37.856 CXX test/cpp_headers/barrier.o 00:01:37.856 CC app/iscsi_tgt/iscsi_tgt.o 00:01:37.856 CXX test/cpp_headers/base64.o 00:01:37.856 CXX test/cpp_headers/bdev.o 00:01:37.856 CXX test/cpp_headers/bdev_module.o 00:01:37.856 CXX test/cpp_headers/bdev_zone.o 00:01:37.856 CXX test/cpp_headers/bit_array.o 00:01:37.856 CXX test/cpp_headers/bit_pool.o 00:01:37.856 CXX test/cpp_headers/blob_bdev.o 00:01:37.856 CXX test/cpp_headers/blobfs_bdev.o 00:01:37.856 CXX test/cpp_headers/blobfs.o 00:01:37.856 CXX test/cpp_headers/blob.o 00:01:37.856 CXX test/cpp_headers/conf.o 00:01:37.856 CXX test/cpp_headers/config.o 00:01:37.856 CXX test/cpp_headers/cpuset.o 00:01:37.856 CXX test/cpp_headers/crc16.o 00:01:37.856 CC app/nvmf_tgt/nvmf_main.o 00:01:37.856 CXX test/cpp_headers/crc32.o 00:01:37.856 CC examples/util/zipf/zipf.o 00:01:37.856 CC test/env/pci/pci_ut.o 00:01:37.856 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:37.856 CC test/env/vtophys/vtophys.o 00:01:37.856 CC examples/ioat/perf/perf.o 00:01:37.856 CC test/env/memory/memory_ut.o 00:01:37.856 CC app/spdk_tgt/spdk_tgt.o 00:01:37.856 CC test/app/jsoncat/jsoncat.o 00:01:37.856 CC app/fio/nvme/fio_plugin.o 00:01:37.856 CC test/app/histogram_perf/histogram_perf.o 00:01:37.856 CC examples/ioat/verify/verify.o 00:01:37.856 CC test/thread/poller_perf/poller_perf.o 00:01:37.856 CC test/app/stub/stub.o 00:01:37.856 CC test/dma/test_dma/test_dma.o 00:01:38.116 CC app/fio/bdev/fio_plugin.o 00:01:38.116 CC test/app/bdev_svc/bdev_svc.o 
00:01:38.116 LINK spdk_lspci 00:01:38.116 CC test/env/mem_callbacks/mem_callbacks.o 00:01:38.116 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:38.116 LINK rpc_client_test 00:01:38.116 LINK spdk_nvme_discover 00:01:38.116 LINK interrupt_tgt 00:01:38.116 LINK zipf 00:01:38.380 LINK vtophys 00:01:38.380 LINK jsoncat 00:01:38.380 LINK histogram_perf 00:01:38.380 LINK poller_perf 00:01:38.380 LINK spdk_trace_record 00:01:38.380 CXX test/cpp_headers/crc64.o 00:01:38.380 CXX test/cpp_headers/dif.o 00:01:38.380 LINK env_dpdk_post_init 00:01:38.380 CXX test/cpp_headers/dma.o 00:01:38.380 CXX test/cpp_headers/endian.o 00:01:38.380 LINK nvmf_tgt 00:01:38.380 CXX test/cpp_headers/env_dpdk.o 00:01:38.380 CXX test/cpp_headers/env.o 00:01:38.380 CXX test/cpp_headers/event.o 00:01:38.380 CXX test/cpp_headers/fd_group.o 00:01:38.380 CXX test/cpp_headers/fd.o 00:01:38.380 CXX test/cpp_headers/file.o 00:01:38.380 LINK iscsi_tgt 00:01:38.380 CXX test/cpp_headers/ftl.o 00:01:38.380 CXX test/cpp_headers/gpt_spec.o 00:01:38.380 LINK stub 00:01:38.380 CXX test/cpp_headers/hexlify.o 00:01:38.380 CXX test/cpp_headers/histogram_data.o 00:01:38.380 CXX test/cpp_headers/idxd.o 00:01:38.380 CXX test/cpp_headers/idxd_spec.o 00:01:38.380 LINK spdk_tgt 00:01:38.380 LINK bdev_svc 00:01:38.380 LINK ioat_perf 00:01:38.380 LINK verify 00:01:38.380 CXX test/cpp_headers/init.o 00:01:38.380 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:38.380 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:38.380 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:38.380 CXX test/cpp_headers/ioat.o 00:01:38.647 CXX test/cpp_headers/ioat_spec.o 00:01:38.647 CXX test/cpp_headers/iscsi_spec.o 00:01:38.647 CXX test/cpp_headers/json.o 00:01:38.647 LINK spdk_dd 00:01:38.647 LINK spdk_trace 00:01:38.647 CXX test/cpp_headers/jsonrpc.o 00:01:38.647 LINK pci_ut 00:01:38.647 CXX test/cpp_headers/keyring.o 00:01:38.647 CXX test/cpp_headers/keyring_module.o 00:01:38.647 CXX test/cpp_headers/likely.o 00:01:38.647 CXX test/cpp_headers/log.o 00:01:38.647 CXX test/cpp_headers/lvol.o 00:01:38.647 CXX test/cpp_headers/memory.o 00:01:38.647 CXX test/cpp_headers/mmio.o 00:01:38.647 CXX test/cpp_headers/nbd.o 00:01:38.647 CXX test/cpp_headers/notify.o 00:01:38.647 CXX test/cpp_headers/nvme.o 00:01:38.647 CXX test/cpp_headers/nvme_intel.o 00:01:38.647 CXX test/cpp_headers/nvme_ocssd.o 00:01:38.647 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:38.647 CXX test/cpp_headers/nvme_spec.o 00:01:38.647 CXX test/cpp_headers/nvme_zns.o 00:01:38.647 CXX test/cpp_headers/nvmf_cmd.o 00:01:38.647 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:38.647 CXX test/cpp_headers/nvmf.o 00:01:38.647 CXX test/cpp_headers/nvmf_spec.o 00:01:38.647 LINK test_dma 00:01:38.647 CXX test/cpp_headers/nvmf_transport.o 00:01:38.910 CXX test/cpp_headers/opal.o 00:01:38.910 CXX test/cpp_headers/opal_spec.o 00:01:38.910 CXX test/cpp_headers/pci_ids.o 00:01:38.910 CXX test/cpp_headers/pipe.o 00:01:38.910 CC examples/sock/hello_world/hello_sock.o 00:01:38.910 CXX test/cpp_headers/queue.o 00:01:38.910 CC examples/thread/thread/thread_ex.o 00:01:38.910 CC examples/vmd/led/led.o 00:01:38.910 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.910 CC examples/idxd/perf/perf.o 00:01:38.910 LINK spdk_bdev 00:01:38.910 CXX test/cpp_headers/reduce.o 00:01:38.910 LINK nvme_fuzz 00:01:39.173 CXX test/cpp_headers/rpc.o 00:01:39.173 LINK spdk_nvme 00:01:39.173 CC test/event/event_perf/event_perf.o 00:01:39.173 CC test/event/reactor/reactor.o 00:01:39.173 CXX test/cpp_headers/scheduler.o 00:01:39.173 CXX test/cpp_headers/scsi_spec.o 
00:01:39.173 CXX test/cpp_headers/scsi.o 00:01:39.173 CXX test/cpp_headers/sock.o 00:01:39.173 CXX test/cpp_headers/stdinc.o 00:01:39.173 CC test/event/reactor_perf/reactor_perf.o 00:01:39.173 CXX test/cpp_headers/string.o 00:01:39.173 CXX test/cpp_headers/thread.o 00:01:39.173 CC test/event/app_repeat/app_repeat.o 00:01:39.173 CXX test/cpp_headers/trace.o 00:01:39.173 CXX test/cpp_headers/trace_parser.o 00:01:39.173 CXX test/cpp_headers/tree.o 00:01:39.173 CXX test/cpp_headers/ublk.o 00:01:39.173 CC test/event/scheduler/scheduler.o 00:01:39.173 CXX test/cpp_headers/util.o 00:01:39.173 CXX test/cpp_headers/uuid.o 00:01:39.173 CXX test/cpp_headers/version.o 00:01:39.173 CXX test/cpp_headers/vfio_user_pci.o 00:01:39.173 CXX test/cpp_headers/vfio_user_spec.o 00:01:39.173 LINK lsvmd 00:01:39.173 CXX test/cpp_headers/vhost.o 00:01:39.173 CXX test/cpp_headers/vmd.o 00:01:39.173 CXX test/cpp_headers/xor.o 00:01:39.173 CXX test/cpp_headers/zipf.o 00:01:39.173 LINK led 00:01:39.173 LINK spdk_nvme_perf 00:01:39.173 LINK vhost_fuzz 00:01:39.173 CC app/vhost/vhost.o 00:01:39.439 LINK mem_callbacks 00:01:39.439 LINK spdk_nvme_identify 00:01:39.439 LINK hello_sock 00:01:39.439 LINK event_perf 00:01:39.439 LINK spdk_top 00:01:39.439 LINK reactor 00:01:39.439 LINK thread 00:01:39.439 LINK reactor_perf 00:01:39.439 LINK app_repeat 00:01:39.439 CC test/nvme/reset/reset.o 00:01:39.439 CC test/nvme/aer/aer.o 00:01:39.439 CC test/nvme/e2edp/nvme_dp.o 00:01:39.439 CC test/nvme/startup/startup.o 00:01:39.439 CC test/nvme/err_injection/err_injection.o 00:01:39.439 CC test/nvme/sgl/sgl.o 00:01:39.439 CC test/nvme/overhead/overhead.o 00:01:39.439 CC test/nvme/reserve/reserve.o 00:01:39.439 CC test/accel/dif/dif.o 00:01:39.704 CC test/nvme/simple_copy/simple_copy.o 00:01:39.704 CC test/blobfs/mkfs/mkfs.o 00:01:39.704 LINK idxd_perf 00:01:39.704 CC test/nvme/connect_stress/connect_stress.o 00:01:39.704 CC test/lvol/esnap/esnap.o 00:01:39.704 CC test/nvme/boot_partition/boot_partition.o 00:01:39.704 LINK scheduler 00:01:39.704 CC test/nvme/compliance/nvme_compliance.o 00:01:39.704 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:39.704 CC test/nvme/fdp/fdp.o 00:01:39.704 CC test/nvme/fused_ordering/fused_ordering.o 00:01:39.704 CC test/nvme/cuse/cuse.o 00:01:39.704 LINK vhost 00:01:39.704 LINK err_injection 00:01:39.704 LINK startup 00:01:39.963 LINK reserve 00:01:39.963 LINK mkfs 00:01:39.963 LINK connect_stress 00:01:39.963 LINK nvme_dp 00:01:39.963 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.963 CC examples/nvme/reconnect/reconnect.o 00:01:39.963 CC examples/nvme/hotplug/hotplug.o 00:01:39.963 LINK doorbell_aers 00:01:39.963 CC examples/nvme/abort/abort.o 00:01:39.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.963 LINK simple_copy 00:01:39.963 CC examples/nvme/arbitration/arbitration.o 00:01:39.963 CC examples/nvme/hello_world/hello_world.o 00:01:39.963 LINK sgl 00:01:39.963 LINK reset 00:01:39.963 LINK fused_ordering 00:01:39.963 CC examples/accel/perf/accel_perf.o 00:01:39.963 LINK boot_partition 00:01:39.963 CC examples/blob/cli/blobcli.o 00:01:39.963 LINK memory_ut 00:01:39.963 LINK aer 00:01:39.963 CC examples/blob/hello_world/hello_blob.o 00:01:39.963 LINK nvme_compliance 00:01:39.963 LINK overhead 00:01:40.220 LINK fdp 00:01:40.220 LINK dif 00:01:40.220 LINK hello_world 00:01:40.220 LINK pmr_persistence 00:01:40.220 LINK cmb_copy 00:01:40.220 LINK hotplug 00:01:40.220 LINK arbitration 00:01:40.220 LINK reconnect 00:01:40.478 LINK 
hello_blob 00:01:40.478 LINK abort 00:01:40.478 LINK accel_perf 00:01:40.478 LINK nvme_manage 00:01:40.736 LINK blobcli 00:01:40.736 CC test/bdev/bdevio/bdevio.o 00:01:40.994 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.994 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.994 LINK iscsi_fuzz 00:01:40.994 LINK bdevio 00:01:40.994 LINK hello_bdev 00:01:41.252 LINK cuse 00:01:41.509 LINK bdevperf 00:01:42.071 CC examples/nvmf/nvmf/nvmf.o 00:01:42.329 LINK nvmf 00:01:44.854 LINK esnap 00:01:45.113 00:01:45.113 real 0m49.942s 00:01:45.113 user 10m7.339s 00:01:45.113 sys 2m28.691s 00:01:45.113 12:54:06 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:45.113 12:54:06 make -- common/autotest_common.sh@10 -- $ set +x 00:01:45.113 ************************************ 00:01:45.113 END TEST make 00:01:45.113 ************************************ 00:01:45.113 12:54:06 -- common/autotest_common.sh@1142 -- $ return 0 00:01:45.113 12:54:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:45.113 12:54:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:45.113 12:54:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:45.113 12:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.113 12:54:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:45.113 12:54:06 -- pm/common@44 -- $ pid=3619072 00:01:45.113 12:54:06 -- pm/common@50 -- $ kill -TERM 3619072 00:01:45.113 12:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.113 12:54:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:45.113 12:54:06 -- pm/common@44 -- $ pid=3619074 00:01:45.113 12:54:06 -- pm/common@50 -- $ kill -TERM 3619074 00:01:45.113 12:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.113 12:54:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:45.113 12:54:06 -- pm/common@44 -- $ pid=3619076 00:01:45.113 12:54:06 -- pm/common@50 -- $ kill -TERM 3619076 00:01:45.113 12:54:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.113 12:54:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:45.113 12:54:06 -- pm/common@44 -- $ pid=3619104 00:01:45.113 12:54:06 -- pm/common@50 -- $ sudo -E kill -TERM 3619104 00:01:45.371 12:54:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:45.371 12:54:06 -- nvmf/common.sh@7 -- # uname -s 00:01:45.371 12:54:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:45.371 12:54:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:45.371 12:54:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:45.371 12:54:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:45.371 12:54:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:45.371 12:54:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:45.371 12:54:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:45.371 12:54:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:45.371 12:54:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:45.371 12:54:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:45.371 12:54:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:01:45.371 12:54:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:45.371 12:54:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:45.371 12:54:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:45.371 12:54:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:45.371 12:54:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:45.371 12:54:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:45.371 12:54:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:45.371 12:54:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:45.371 12:54:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:45.371 12:54:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.371 12:54:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.371 12:54:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.371 12:54:06 -- paths/export.sh@5 -- # export PATH 00:01:45.371 12:54:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.371 12:54:06 -- nvmf/common.sh@47 -- # : 0 00:01:45.371 12:54:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:45.371 12:54:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:45.371 12:54:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:45.371 12:54:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:45.371 12:54:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:45.371 12:54:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:45.371 12:54:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:45.371 12:54:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:45.371 12:54:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:45.371 12:54:06 -- spdk/autotest.sh@32 -- # uname -s 00:01:45.371 12:54:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:45.371 12:54:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:45.371 12:54:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:45.371 12:54:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:45.371 12:54:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:45.371 12:54:06 -- spdk/autotest.sh@44 -- # modprobe nbd 
00:01:45.371 12:54:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:45.371 12:54:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:45.371 12:54:06 -- spdk/autotest.sh@48 -- # udevadm_pid=3674800 00:01:45.371 12:54:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:45.371 12:54:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:45.371 12:54:06 -- pm/common@17 -- # local monitor 00:01:45.371 12:54:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.371 12:54:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.371 12:54:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.371 12:54:06 -- pm/common@21 -- # date +%s 00:01:45.371 12:54:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.371 12:54:06 -- pm/common@25 -- # sleep 1 00:01:45.371 12:54:06 -- pm/common@21 -- # date +%s 00:01:45.371 12:54:06 -- pm/common@21 -- # date +%s 00:01:45.371 12:54:06 -- pm/common@21 -- # date +%s 00:01:45.371 12:54:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040846 00:01:45.371 12:54:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040846 00:01:45.371 12:54:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040846 00:01:45.371 12:54:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040846 00:01:45.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040846_collect-vmstat.pm.log 00:01:45.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040846_collect-cpu-load.pm.log 00:01:45.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040846_collect-cpu-temp.pm.log 00:01:45.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040846_collect-bmc-pm.bmc.pm.log 00:01:46.305 12:54:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:46.305 12:54:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:46.305 12:54:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:46.305 12:54:07 -- common/autotest_common.sh@10 -- # set +x 00:01:46.305 12:54:07 -- spdk/autotest.sh@59 -- # create_test_list 00:01:46.305 12:54:07 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:46.305 12:54:07 -- common/autotest_common.sh@10 -- # set +x 00:01:46.305 12:54:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:46.306 12:54:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.306 12:54:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.306 12:54:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:46.306 12:54:07 
-- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.306 12:54:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:46.306 12:54:07 -- common/autotest_common.sh@1455 -- # uname 00:01:46.306 12:54:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:46.306 12:54:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:46.306 12:54:07 -- common/autotest_common.sh@1475 -- # uname 00:01:46.306 12:54:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:46.306 12:54:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:46.306 12:54:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:46.306 12:54:07 -- spdk/autotest.sh@72 -- # hash lcov 00:01:46.306 12:54:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:46.306 12:54:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:46.306 --rc lcov_branch_coverage=1 00:01:46.306 --rc lcov_function_coverage=1 00:01:46.306 --rc genhtml_branch_coverage=1 00:01:46.306 --rc genhtml_function_coverage=1 00:01:46.306 --rc genhtml_legend=1 00:01:46.306 --rc geninfo_all_blocks=1 00:01:46.306 ' 00:01:46.306 12:54:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:46.306 --rc lcov_branch_coverage=1 00:01:46.306 --rc lcov_function_coverage=1 00:01:46.306 --rc genhtml_branch_coverage=1 00:01:46.306 --rc genhtml_function_coverage=1 00:01:46.306 --rc genhtml_legend=1 00:01:46.306 --rc geninfo_all_blocks=1 00:01:46.306 ' 00:01:46.306 12:54:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:46.306 --rc lcov_branch_coverage=1 00:01:46.306 --rc lcov_function_coverage=1 00:01:46.306 --rc genhtml_branch_coverage=1 00:01:46.306 --rc genhtml_function_coverage=1 00:01:46.306 --rc genhtml_legend=1 00:01:46.306 --rc geninfo_all_blocks=1 00:01:46.306 --no-external' 00:01:46.306 12:54:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:46.306 --rc lcov_branch_coverage=1 00:01:46.306 --rc lcov_function_coverage=1 00:01:46.306 --rc genhtml_branch_coverage=1 00:01:46.306 --rc genhtml_function_coverage=1 00:01:46.306 --rc genhtml_legend=1 00:01:46.306 --rc geninfo_all_blocks=1 00:01:46.306 --no-external' 00:01:46.306 12:54:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:46.564 lcov: LCOV version 1.14 00:01:46.564 12:54:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:48.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:48.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:48.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:48.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:48.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:48.464 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
[the same 'no functions found' / 'GCOV did not produce any data' warning pair repeats for every remaining header object under test/cpp_headers, from barrier.gcno through vhost.gcno; those repeated entries are omitted here, and the last few appear below]
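These warnings are expected at this point in the run: the lcov invocation above captures an initial baseline (-c -i), which records zero execution counts for every instrumented object, and the header-only objects under test/cpp_headers contain no executable functions, which is why geninfo reports 'no functions found' for them. For reference, a minimal sketch of the baseline-then-merge flow this log is performing, with hypothetical directory and file names:

    # zero-count baseline before any tests execute (-i / --initial)
    lcov --capture --initial --directory ./build --output-file cov_base.info
    # ... run the test suite, producing .gcda files ...
    # post-test capture
    lcov --capture --directory ./build --output-file cov_test.info
    # merge so files never touched by the tests still show up at 0%
    lcov --add-tracefile cov_base.info --add-tracefile cov_test.info --output-file cov_total.info

The remaining 'no functions found' entries below follow the same pattern.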
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:48.465 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:48.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:48.465 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:01:48.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:48.465 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:03.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:03.374 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:21.469 12:54:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:21.469 12:54:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:21.469 12:54:42 -- common/autotest_common.sh@10 -- # set +x 00:02:21.469 12:54:42 -- spdk/autotest.sh@91 -- # rm -f 00:02:21.469 12:54:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.736 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:21.736 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:21.736 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:21.736 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:21.736 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:21.739 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:21.739 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:21.739 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:21.739 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:21.739 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:21.739 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:21.739 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:21.739 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:21.739 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:22.015 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:22.015 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:22.015 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:22.015 12:54:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:22.015 12:54:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.015 12:54:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.015 12:54:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.015 12:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:22.015 12:54:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.015 12:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.015 12:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.015 12:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.015 12:54:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:22.015 12:54:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:22.015 12:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:22.015 
12:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:22.015 12:54:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:22.015 12:54:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:22.015 No valid GPT data, bailing 00:02:22.015 12:54:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:22.015 12:54:43 -- scripts/common.sh@391 -- # pt= 00:02:22.015 12:54:43 -- scripts/common.sh@392 -- # return 1 00:02:22.015 12:54:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:22.015 1+0 records in 00:02:22.015 1+0 records out 00:02:22.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459989 s, 228 MB/s 00:02:22.015 12:54:43 -- spdk/autotest.sh@118 -- # sync 00:02:22.015 12:54:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:22.015 12:54:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:22.015 12:54:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:23.919 12:54:45 -- spdk/autotest.sh@124 -- # uname -s 00:02:23.919 12:54:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:23.919 12:54:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.919 12:54:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:23.919 12:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:23.919 12:54:45 -- common/autotest_common.sh@10 -- # set +x 00:02:23.919 ************************************ 00:02:23.919 START TEST setup.sh 00:02:23.919 ************************************ 00:02:23.919 12:54:45 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.919 * Looking for test storage... 00:02:23.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:23.919 12:54:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:23.919 12:54:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:23.919 12:54:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:23.919 12:54:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:23.919 12:54:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:23.919 12:54:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:23.919 ************************************ 00:02:23.919 START TEST acl 00:02:23.919 ************************************ 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:23.919 * Looking for test storage... 
00:02:23.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.919 12:54:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:23.919 12:54:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:23.919 12:54:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.919 12:54:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.299 12:54:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:25.299 12:54:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:25.299 12:54:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.299 12:54:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:25.299 12:54:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.299 12:54:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:26.673 Hugepages 00:02:26.673 node hugesize free / total 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.673 00:02:26.673 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.673 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue [the same match/skip xtrace triplet repeats for each remaining ioatdma device from 0000:00:04.2 through 0000:80:04.3; repeated entries omitted] 00:02:26.674 12:54:48
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:26.674 12:54:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:26.674 12:54:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.674 12:54:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.674 12:54:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:26.674 ************************************ 00:02:26.674 START TEST denied 00:02:26.674 ************************************ 00:02:26.674 12:54:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:26.674 12:54:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:26.674 12:54:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:26.674 12:54:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:26.674 12:54:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.674 12:54:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:28.050 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:28.050 12:54:49 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.050 12:54:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.617 00:02:30.617 real 0m3.739s 00:02:30.617 user 0m1.109s 00:02:30.617 sys 0m1.743s 00:02:30.617 12:54:51 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:30.617 12:54:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:30.617 ************************************ 00:02:30.617 END TEST denied 00:02:30.617 ************************************ 00:02:30.617 12:54:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:30.617 12:54:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:30.617 12:54:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:30.617 12:54:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:30.617 12:54:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:30.617 ************************************ 00:02:30.617 START TEST allowed 00:02:30.617 ************************************ 00:02:30.617 12:54:51 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:30.617 12:54:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:30.617 12:54:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:30.617 12:54:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:30.617 12:54:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.617 12:54:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:33.152 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:33.152 12:54:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:33.152 12:54:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:33.152 12:54:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:33.152 12:54:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:33.152 12:54:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.088 00:02:34.088 real 0m3.743s 00:02:34.088 user 0m0.984s 00:02:34.088 sys 0m1.616s 00:02:34.088 12:54:55 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:34.088 12:54:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:34.088 ************************************ 00:02:34.088 END TEST allowed 00:02:34.088 ************************************ 00:02:34.088 12:54:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:34.088 00:02:34.088 real 0m10.204s 00:02:34.088 user 0m3.140s 00:02:34.088 sys 0m5.100s 00:02:34.088 12:54:55 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:34.088 12:54:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:34.088 ************************************ 00:02:34.088 END TEST acl 00:02:34.088 ************************************ 00:02:34.088 12:54:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:34.088 12:54:55 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.088 12:54:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.088 12:54:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.088 12:54:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:34.088 ************************************ 00:02:34.088 START TEST hugepages 00:02:34.088 ************************************ 00:02:34.088 12:54:55 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.348 * Looking for test storage... 00:02:34.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.348 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:34.348 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:34.348 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:34.348 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43684500 kB' 'MemAvailable: 47186708 kB' 'Buffers: 2704 kB' 'Cached: 10296992 kB' 'SwapCached: 0 kB' 'Active: 7291848 kB' 'Inactive: 3506596 kB' 'Active(anon): 6897256 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502408 kB' 'Mapped: 202656 kB' 'Shmem: 6398508 kB' 'KReclaimable: 189720 kB' 'Slab: 557588 kB' 'SReclaimable: 189720 kB' 'SUnreclaim: 367868 kB' 'KernelStack: 12800 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 8011928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:34.349
[get_meminfo then scans the captured records one field at a time — for each of MemTotal, MemFree, MemAvailable, and so on, the xtrace shows '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' followed by 'continue' — until it reaches Hugepagesize; the dozens of identical match/skip pairs are omitted here, and the transcript breaks off partway through the scan]
setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- 
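[Editor's note: the elided scan above is setup/common.sh's get_meminfo walking /proc/meminfo one record at a time. A minimal bash sketch of the pattern the xtrace shows — the helper name and the IFS/read/continue structure come straight from the trace; the exact body is an approximation, not the shipped source:]

    # Split each /proc/meminfo line on ': ', skip keys that don't match,
    # then print the value; the trailing 'kB' unit lands in the throwaway _.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # MemTotal, MemFree, ... skipped
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    get_meminfo Hugepagesize    # -> 2048 on this runner, matching the trace
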
setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.349 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.350 
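[Editor's note: the "echo 0" records here are clear_hp zeroing every hugepage pool on both NUMA nodes before the test, after which CLEAR_HUGE=yes is exported and default_setup requests 2097152 / 2048 = 1024 pages (the nr_hugepages=1024 just below). A hedged sketch of what those writes amount to — the sysfs paths are the standard kernel layout, but treating the traced "echo 0" as a write to nr_hugepages is an assumption:]

    # Zero every per-node hugepage pool (requires root), as clear_hp does.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"     # the 'echo 0' records in the trace
        done
    done
    export CLEAR_HUGE=yes

    # get_test_nr_hugepages: requested size over the 2048 kB default page size
    echo $(( 2097152 / 2048 ))              # -> 1024
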
12:54:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:34.350 12:54:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:34.350 12:54:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.350 12:54:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.350 12:54:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:34.350 ************************************ 00:02:34.350 START TEST default_setup 00:02:34.350 ************************************ 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.350 12:54:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:35.728 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.728 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.728 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.729 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.729 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.729 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.729 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.729 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.729 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:36.297 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.559 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802360 kB' 'MemAvailable: 49304552 kB' 'Buffers: 2704 kB' 'Cached: 10297080 kB' 'SwapCached: 0 kB' 'Active: 7310784 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916192 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520808 kB' 'Mapped: 202748 kB' 'Shmem: 6398596 kB' 'KReclaimable: 189688 kB' 'Slab: 557012 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367324 kB' 
'KernelStack: 12736 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8032872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[... xtrace elided: the identical per-key scan repeats for every /proc/meminfo key from MemTotal through WritebackTmp while get_meminfo scans for AnonHugePages; the trace resumes below at the last iterations and the match ...]
00:02:36.560 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.560 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.560 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.560 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.561 12:54:58 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802944 kB' 'MemAvailable: 49305136 kB' 'Buffers: 2704 kB' 'Cached: 10297084 kB' 'SwapCached: 0 kB' 'Active: 7310500 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915908 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520512 kB' 'Mapped: 202692 kB' 'Shmem: 6398600 kB' 'KReclaimable: 189688 kB' 'Slab: 557004 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367316 kB' 'KernelStack: 12720 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8032892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.561 12:54:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the identical per-key scan repeats for every /proc/meminfo key from Buffers through CmaTotal while get_meminfo scans for HugePages_Surp; the trace resumes below at the last iterations and the match ...]
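[Editor's note: each get_meminfo call in this block opens with "local node=" and a probe of /sys/devices/system/node/node/meminfo; with no node argument the probe fails, [[ -n '' ]] is false, and mem_f stays /proc/meminfo. A sketch of that source-selection step — the wrapper name local_meminfo_source is hypothetical, invented here for illustration:]

    # Pick a per-node meminfo file when a node id is supplied, else fall back
    # to the system-wide /proc/meminfo (the path used throughout this run).
    local_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    local_meminfo_source      # -> /proc/meminfo, as in this trace (node empty)
    local_meminfo_source 0    # -> node0's meminfo, when present
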
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- 
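Two more of these scans follow below (for HugePages_Rsvd and then HugePages_Total), so it is worth decoding the pattern once. The sketch below is reconstructed purely from the xtrace markers in this log (setup/common.sh@17-@33); it is not the verbatim SPDK source, and the while-read shape of the scan loop and the bail-out on a missing node are inferences.

    shopt -s extglob

    # Sketch of setup/common.sh:get_meminfo, reconstructed from the trace.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node lookup, e.g. get_meminfo HugePages_Surp 0
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1 # assumption: give up when the requested node is absent
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value ..." pairs; every non-matching key hits continue,
        # which is exactly the @31/@32 churn that fills this log.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp it just returned 0 above; with a node argument (get_meminfo HugePages_Surp 0, seen further down) it reads node0's meminfo instead of the system-wide one.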
00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:36.562 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.563 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45803544 kB' 'MemAvailable: 49305736 kB' 'Buffers: 2704 kB' 'Cached: 10297100 kB' 'SwapCached: 0 kB' 'Active: 7311180 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916588 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521208 kB' 'Mapped: 202692 kB' 'Shmem: 6398616 kB' 'KReclaimable: 189688 kB' 'Slab: 557076 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367388 kB' 'KernelStack: 12768 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8039272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[...xtrace trimmed: the same @31/@32 scan walks every field of the snapshot above, MemTotal through HugePages_Free, skipping each one with continue while looking for HugePages_Rsvd...]
00:02:36.564 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.564 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.564 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.564 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:36.564 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:36.564 nr_hugepages=1024
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:36.565 resv_hugepages=0
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:36.565 surplus_hugepages=0
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:36.565 anon_hugepages=0
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
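The hugepages.sh@99-@109 lines above are the pool-accounting step of the test. A rough sketch of it, with the variable names taken from the trace but the control flow and the nr_hugepages_target name being assumptions (the trace only shows the already-expanded value 1024):

    # Sketch inferred from the trace above; not the verbatim hugepages.sh source.
    verify_nr_hugepages() {
        local nr_hugepages_target=1024 # hypothetical name: the requested page count
        local surp resv

        surp=$(get_meminfo HugePages_Surp) # -> 0 in this run
        resv=$(get_meminfo HugePages_Rsvd) # -> 0 in this run

        echo "nr_hugepages=$nr_hugepages_target"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$(get_meminfo AnonHugePages)"

        # The pool is sane when every page the kernel reports is one we asked
        # for: 1024 total, none surplus, none reserved. The second form, with
        # the kernel's own HugePages_Total counter on the left, is what the
        # @110 line just below traces.
        (( nr_hugepages_target == nr_hugepages + surp + resv ))
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }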
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.565 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45804148 kB' 'MemAvailable: 49306340 kB' 'Buffers: 2704 kB' 'Cached: 10297124 kB' 'SwapCached: 0 kB' 'Active: 7310220 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520220 kB' 'Mapped: 202692 kB' 'Shmem: 6398640 kB' 'KReclaimable: 189688 kB' 'Slab: 557076 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367388 kB' 'KernelStack: 12688 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8032568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[...xtrace trimmed: the @31/@32 scan skips every field of the snapshot above with continue until HugePages_Total matches...]
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:36.826 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21542984 kB' 'MemUsed: 11333956 kB' 'SwapCached: 0 kB' 'Active: 4824544 kB' 'Inactive: 3263864 kB' 'Active(anon): 4635972 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799728 kB' 'Mapped: 64736 kB' 'AnonPages: 291800 kB' 'Shmem: 4347292 kB' 'KernelStack: 7624 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309488 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
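Note that the node0 snapshot above prints bare field names even though the raw lines in /sys/devices/system/node/node0/meminfo all start with a "Node 0 " prefix; that is the common.sh@29 expansion at work. A standalone illustration of that one expansion, using values taken from the snapshot (requires extglob):

    shopt -s extglob

    # Raw per-node meminfo lines carry a "Node N " prefix that /proc/meminfo
    # lines do not; the expansion below strips it from every array element.
    mem=("Node 0 MemTotal: 32876940 kB" "Node 0 MemFree: 21542984 kB")
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 32876940 kB
    # MemFree: 21542984 kB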
[...xtrace trimmed: the @31/@32 scan now walks node0's fields (MemTotal, MemFree, MemUsed, ...) looking for HugePages_Surp, skipping each non-matching key with continue; the trace continues in the same pattern...]
-- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:36.827 node0=1024 expecting 1024
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:36.827
00:02:36.827 real 0m2.393s
00:02:36.827 user 0m0.640s
00:02:36.827 sys 0m0.868s
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:36.827 12:54:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:36.827 ************************************
00:02:36.827 END TEST default_setup
00:02:36.827 ************************************
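The condensed meminfo scans in this trace all follow one pattern inside setup/common.sh's get_meminfo; below is a minimal bash sketch of that pattern as reconstructed from the xtrace. The helper name get_meminfo_sketch is illustrative, and the real script's line numbers and argument handling may differ.

    # Look up one key in a meminfo snapshot, mirroring the traced common.sh@16-33 flow:
    # capture the file, strip any "Node N " prefixes, then scan key by key.
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo; matters for per-node meminfo files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries seen in the trace
            echo "$val"
            return 0
        done
        return 1
    }

Under that sketch, get_meminfo_sketch HugePages_Surp prints 0, matching the "echo 0" in the trace above.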
00:02:36.827 12:54:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:36.827 12:54:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:36.827 12:54:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:36.827 12:54:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:36.827 12:54:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:36.827 ************************************
00:02:36.827 START TEST per_node_1G_alloc
00:02:36.827 ************************************
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:36.827 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:36.828 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:36.828 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:36.828 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:36.828 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.828 12:54:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:37.766 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:37.766 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.766 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:37.766 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:37.766 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:37.766 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:37.766 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:37.766 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:37.766 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:37.766 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:38.028 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:38.028 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:38.028 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:38.028 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:38.028 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:38.028 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:38.028 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
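The get_test_nr_hugepages trace above (a 1048576 kB request becoming 512 pages on each of nodes 0 and 1) implies the arithmetic sketched below, assuming the 2048 kB default hugepage size that the meminfo snapshots report ('Hugepagesize: 2048 kB'); variable names follow the trace, the rest is illustrative.

    # 1 GiB request -> default-size pages, assigned to each requested node.
    size=1048576             # kB, from "local size=1048576" in the trace
    default_hugepages=2048   # kB, per "Hugepagesize: 2048 kB" in the snapshots
    nr_hugepages=$(( size / default_hugepages ))   # 512, matching "nr_hugepages=512"
    declare -a nodes_test
    for node in 0 1; do      # user_nodes=('0' '1') in the trace
        nodes_test[node]=$nr_hugepages
    done
    # setup.sh is then driven with NRHUGE=512 HUGENODE=0,1 as logged above, so
    # HugePages_Total ends up at 1024 (2 nodes x 512), per "nr_hugepages=1024" below.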
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45792040 kB' 'MemAvailable: 49294232 kB' 'Buffers: 2704 kB' 'Cached: 10297204 kB' 'SwapCached: 0 kB' 'Active: 7310856 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916264 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520788 kB' 'Mapped: 202760 kB' 'Shmem: 6398720 kB' 'KReclaimable: 189688 kB' 'Slab: 557040 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367352 kB' 'KernelStack: 12720 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8033128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:38.028 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the snapshot above; every key from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and hits continue]
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
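verify_nr_hugepages gathers three counters this way: anon above, then HugePages_Surp and HugePages_Rsvd in the reads that follow. A short sketch of that bookkeeping, reusing the illustrative get_meminfo_sketch helper sketched earlier:

    # Counters gathered by verify_nr_hugepages, per the traced hugepages.sh@96-100:
    anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here; read because the THP mode is "[madvise]", not "never"
    surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages allocated beyond nr_hugepages
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # pages reserved for mappings but not yet faulted in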
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45792040 kB' 'MemAvailable: 49294232 kB' 'Buffers: 2704 kB' 'Cached: 10297204 kB' 'SwapCached: 0 kB' 'Active: 7310896 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520812 kB' 'Mapped: 202704 kB' 'Shmem: 6398720 kB' 'KReclaimable: 189688 kB' 'Slab: 557040 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367352 kB' 'KernelStack: 12768 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8033148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:38.030 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: same per-key scan over the snapshot above; every key from MemTotal through HugePages_Rsvd fails [[ $var == HugePages_Surp ]] and hits continue]
00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
setup/hugepages.sh@99 -- # surp=0 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.031 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45794656 kB' 'MemAvailable: 49296848 kB' 'Buffers: 2704 kB' 'Cached: 10297224 kB' 'SwapCached: 0 kB' 'Active: 7310748 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916156 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520696 kB' 'Mapped: 202704 kB' 'Shmem: 6398740 kB' 'KReclaimable: 189688 kB' 'Slab: 557056 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367368 kB' 'KernelStack: 12752 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8033168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.032 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.032 
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [meminfo scan: MemAvailable through HugePages_Free each fail the HugePages_Rsvd match and continue]
00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:38.033 nr_hugepages=1024
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
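With surp=0 and resv=0 established, hugepages.sh cross-checks the pool size (the (( 1024 == nr_hugepages + surp + resv )) arithmetic traced below). A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above; reading the request from /proc/sys/vm/nr_hugepages is an assumption for illustration, the test tracks nr_hugepages internally:

    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    nr_hugepages=$(</proc/sys/vm/nr_hugepages)   # assumed source of the requested count
    # the pool is consistent when the total covers requested + surplus + reserved pages
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2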
resv_hugepages=0 00:02:38.033 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.033 surplus_hugepages=0 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.034 anon_hugepages=0 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.034 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45794896 kB' 'MemAvailable: 49297088 kB' 'Buffers: 2704 kB' 'Cached: 10297248 kB' 'SwapCached: 0 kB' 'Active: 7310756 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520704 kB' 'Mapped: 202704 kB' 'Shmem: 6398764 kB' 'KReclaimable: 189688 kB' 'Slab: 557040 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367352 kB' 'KernelStack: 12752 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8033192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.296 
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.296 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [meminfo scan: MemFree through Unaccepted each fail the HugePages_Total match and continue]
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
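get_nodes, entered above and continuing below, enumerates NUMA nodes by globbing sysfs. A self-contained sketch of the same technique, lifted from the traced lines; the 512-page per-node target is this run's configuration, and extglob must be enabled for the +([0-9]) pattern:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # node index -> expected pages (512+512=1024 here)
    done
    no_nodes=${#nodes_sys[@]}           # 2 on this machine
    (( no_nodes > 0 )) || exit 1

${node##*node} strips everything through the last "node" in the path, leaving just the numeric index to use as the array key.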
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.298 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22590024 kB' 'MemUsed: 10286916 kB' 'SwapCached: 0 kB' 'Active: 4825084 kB' 'Inactive: 3263864 kB' 'Active(anon): 4636512 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799740 kB' 'Mapped: 64748 kB' 'AnonPages: 292380 kB' 'Shmem: 4347304 kB' 'KernelStack: 7688 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309488 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue
00:02:38.299 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [node0 meminfo scan: MemUsed through HugePages_Total each fail the HugePages_Surp match and continue]
00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.300 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23205304 kB' 'MemUsed: 4459448 kB' 'SwapCached: 0 kB' 'Active: 2485736 kB' 'Inactive: 242732 kB' 'Active(anon): 2279716 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2500256 kB' 'Mapped: 137956 kB' 'AnonPages: 228328 kB' 'Shmem: 2051504 kB' 'KernelStack: 5064 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76452 kB' 'Slab: 247552 kB' 'SReclaimable: 76452 kB' 'SUnreclaim: 171100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:02:38.300 [trace condensed: the same field-matching cycle repeats over the node1 meminfo output above, from MemTotal through HugePages_Free, until HugePages_Surp is reached]
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:38.302 node0=512 expecting 512
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:38.302 node1=512 expecting 512
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:38.302
00:02:38.302 real 0m1.462s
00:02:38.302 user 0m0.620s
00:02:38.302 sys 0m0.805s
12:54:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:38.302 12:54:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST per_node_1G_alloc
************************************
12:54:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
12:54:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
12:54:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:38.302 12:54:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:38.302 12:54:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST even_2G_alloc
************************************
12:54:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
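[editor's note] The get_test_nr_hugepages trace above is plain arithmetic: the requested size 2097152 (consistent with kB units, given that nr_hugepages comes out as 1024) divided by the 2048 kB default hugepage size yields 1024 pages, which the @81-@84 loop then deals out evenly across the two NUMA nodes. A back-of-the-envelope sketch of that computation (units and the division are inferred from the traced values, not read from the kernel):

    size_kb=2097152            # 2 GiB pool, as requested by even_2G_alloc
    hugepage_kb=2048           # default 2 MiB hugepage on x86_64 (assumed)
    no_nodes=2                 # _no_nodes=2 in the trace

    nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 1024
    per_node=$(( nr_hugepages / no_nodes ))     # -> 512 for each node
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, the setup.sh run below is expected to leave 512 hugepages on each node.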
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
12:54:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:39.239 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:39.239 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.239 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.239 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.239 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.239 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.239 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.239 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.239 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.239 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.239 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.239 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.239 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.239 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.239 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.239 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.239 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.500 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45767352 kB' 'MemAvailable: 49269544 kB' 'Buffers: 2704 kB' 'Cached: 10297336 kB' 'SwapCached: 0 kB' 'Active: 7315516 kB' 'Inactive: 3506596 kB' 'Active(anon): 6920924 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525260 kB' 'Mapped: 203264 kB' 'Shmem: 6398852 kB' 'KReclaimable: 189688 kB' 'Slab: 556896 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367208 kB' 'KernelStack: 12720 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8037124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:39.501 [trace condensed: the field-matching cycle repeats over the /proc/meminfo output above, from MemTotal through HardwareCorrupted, until AnonHugePages is reached]
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
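[editor's note] The @96-@97 entries above gate the anonymous-hugepage adjustment on the transparent-hugepage setting. On this box the THP knob reads 'always [madvise] never', which does not match *[never]*, so verify_nr_hugepages goes on to query AnonHugePages; /proc/meminfo reports 0 kB, which is why the trace below lands on anon=0. A sketch of that gate (standard kernel sysfs/procfs paths; the surrounding function is paraphrased from the trace, not copied from SPDK):

    # "always [madvise] never" -> the bracketed word is the active THP mode.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)

    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may be backing anonymous memory with hugepages; count them (kB).
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier
    fi
    echo "anon=$anon"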
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45764324 kB' 'MemAvailable: 49266516 kB' 'Buffers: 2704 kB' 'Cached: 10297336 kB' 'SwapCached: 0 kB' 'Active: 7317276 kB' 'Inactive: 3506596 kB' 'Active(anon): 6922684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527040 kB' 'Mapped: 203448 kB' 'Shmem: 6398852 kB' 'KReclaimable: 189688 kB' 'Slab: 556912 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367224 kB' 'KernelStack: 12752 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8039400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:39.502 [trace condensed: the field-matching cycle repeats over this output, reaching Writeback by the end of this excerpt; the scan continues below]
00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.502 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
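The lookup traced above is the generic get_meminfo helper from setup/common.sh: it reads /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), splits each "Key: value [unit]" line on ': ', and prints the value of the first key that matches. A minimal sketch of that pattern, assuming GNU bash with extglob; this is an illustration of what the xtrace shows, not the verbatim common.sh source:

shopt -s extglob  # required for the +([0-9]) pattern used below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    # Prefer the per-node file when a NUMA node is supplied. With $node
    # empty the probed path degenerates to .../node/node/meminfo (exactly
    # as seen in the trace), the -e test fails, and the function falls
    # back to the system-wide /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it so the
    # key always sits in the first field.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # 'HugePages_Surp: 0' -> var=HugePages_Surp, val=0; for kB fields
        # the unit lands in $_ and only the number is printed.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

On this box, get_meminfo_sketch HugePages_Surp prints 0, which is the value the trace assigns to surp above.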
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.503 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45769080 kB' 'MemAvailable: 49271272 kB' 'Buffers: 2704 kB' 'Cached: 10297356 kB' 'SwapCached: 0 kB' 'Active: 7308488 kB' 'Inactive: 3506596 kB' 'Active(anon): 6913896 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518276 kB' 'Mapped: 201812 kB' 'Shmem: 6398872 kB' 'KReclaimable: 189688 kB' 'Slab: 556976 kB' 'SReclaimable: 189688 kB' 'SUnreclaim: 367288 kB' 'KernelStack: 12720 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8017828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same per-key scan repeats against HugePages_Rsvd]
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:39.504 nr_hugepages=1024
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:39.504 resv_hugepages=0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:39.504 surplus_hugepages=0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:39.504 anon_hugepages=0
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
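The four echoes and the two arithmetic guards above are the consistency check for this even-2G allocation: every page requested via nr_hugepages must be accounted for by the kernel, with no surplus or reserved pages outstanding (1024 pages at a Hugepagesize of 2048 kB is the 2 GB the meminfo snapshots report as Hugetlb: 2097152 kB). A compressed sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from earlier; values mirror the trace:

# Expected page count: 2 GB / 2048 kB per page -> 1024 pages.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the trace above
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in the trace above
# The configured total must cover every requested page plus any surplus
# and reserved pages; with surp=resv=0 both guards reduce to 1024 == 1024.
(( nr_hugepages + surp + resv == 1024 )) || exit 1
(( nr_hugepages == 1024 )) || exit 1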
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:39.504 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.505 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45772464 kB' 'MemAvailable: 49274608 kB' 'Buffers: 2704 kB' 'Cached: 10297380 kB' 'SwapCached: 0 kB' 'Active: 7308060 kB' 'Inactive: 3506596 kB' 'Active(anon): 6913468 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517796 kB' 'Mapped: 201768 kB' 'Shmem: 6398896 kB' 'KReclaimable: 189592 kB' 'Slab: 556864 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367272 kB' 'KernelStack: 12672 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8017852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same per-key scan repeats against HugePages_Total]
12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22581040 kB' 'MemUsed: 10295900 kB' 'SwapCached: 0 kB' 'Active: 4822704 kB' 'Inactive: 3263864 kB' 'Active(anon): 4634132 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799740 kB' 'Mapped: 64032 kB' 'AnonPages: 289992 kB' 'Shmem: 4347304 kB' 'KernelStack: 7608 kB' 'PageTables: 
4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309412 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.506 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23193304 kB' 'MemUsed: 4471448 kB' 'SwapCached: 0 kB' 'Active: 2485240 kB' 'Inactive: 242732 kB' 'Active(anon): 2279220 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2500384 kB' 'Mapped: 137736 kB' 'AnonPages: 227640 kB' 'Shmem: 2051632 kB' 'KernelStack: 5064 kB' 'PageTables: 
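The helper driving these scans can be pieced together from the setup/common.sh@17-33 records above. The sketch below is an approximation inferred from the xtrace, not the verbatim SPDK source; the extglob shopt, the process-substitution plumbing, and the fall-through return 1 are assumptions.

    # Sketch of get_meminfo as traced at setup/common.sh@17-33; assumptions noted above.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        shopt -s extglob    # assumed: needed for the +([0-9]) pattern below
        # A node argument switches the source to that node's sysfs meminfo,
        # whose lines carry a "Node N " prefix (e.g. "Node 0 MemTotal: ...").
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix
        # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1    # assumed behavior when the key is absent
    }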
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.507 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23193304 kB' 'MemUsed: 4471448 kB' 'SwapCached: 0 kB' 'Active: 2485240 kB' 'Inactive: 242732 kB' 'Active(anon): 2279220 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2500384 kB' 'Mapped: 137736 kB' 'AnonPages: 227640 kB' 'Shmem: 2051632 kB' 'KernelStack: 5064 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76356 kB' 'Slab: 247420 kB' 'SReclaimable: 76356 kB' 'SUnreclaim: 171064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@31-32 scans the node1 fields (MemTotal through HugePages_Free), continuing past every non-match ...]
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:39.765 node0=512 expecting 512
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:39.765 node1=512 expecting 512
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:39.765
00:02:39.765 real 0m1.367s
00:02:39.765 user 0m0.565s
00:02:39.765 sys 0m0.759s
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:39.765 12:55:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:39.765 ************************************
00:02:39.765 END TEST even_2G_alloc
00:02:39.765 ************************************
00:02:39.765 12:55:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:39.765 12:55:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:39.765 12:55:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:39.765 12:55:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:39.765 12:55:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
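The pass that just ended amounts to a small piece of accounting. The sketch below restates the verification seen in the setup/hugepages.sh@110-128 records; the loop bodies are inferred from the trace (sorted_t/sorted_s appear to be bookkeeping arrays for a later comparison), so treat it as an illustration rather than the verbatim script.

    # Sketch of the traced verification: global hugepage count first, then per node.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))      # 1024 == 1024 + 0 + 0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                      # reserved pages, 0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))     # per-node surplus, 0 here
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # node0=512 expecting 512
    done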
00:02:39.765 ************************************
00:02:39.765 START TEST odd_alloc
00:02:39.765 ************************************
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:39.765 12:55:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:40.703 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:40.703 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:40.703 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:40.703 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:40.703 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:40.703 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.703 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.703 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.703 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.703 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:40.703 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:40.703 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:40.703 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:40.703 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.703 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.703 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.703 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.966 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781964 kB' 'MemAvailable: 49284108 kB' 'Buffers: 2704 kB' 'Cached: 10297468 kB' 'SwapCached: 0 kB' 'Active: 7309048 kB' 'Inactive: 3506596 kB' 'Active(anon): 6914456 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518276 kB' 'Mapped: 201912 kB' 'Shmem: 6398984 kB' 'KReclaimable: 189592 kB' 'Slab: 556884 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367292 kB' 'KernelStack: 12720 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8020412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[... xtrace elided: setup/common.sh@31-32 scans the /proc/meminfo fields (MemTotal through HardwareCorrupted), continuing past every non-match ...]
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.968
12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45782080 kB' 'MemAvailable: 49284224 kB' 'Buffers: 2704 kB' 'Cached: 10297468 kB' 'SwapCached: 0 kB' 'Active: 7309788 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915196 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519436 kB' 'Mapped: 201856 kB' 'Shmem: 6398984 kB' 'KReclaimable: 189592 kB' 'Slab: 556884 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367292 kB' 'KernelStack: 12688 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8018936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.968 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
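The trace entries above and below are setup/common.sh's get_meminfo helper scanning a /proc/meminfo snapshot one line at a time: each iteration resets IFS=': ', reads a "var val" pair, and continues until var matches the requested key, at which point the value is echoed and the function returns. A minimal bash sketch of that helper, reconstructed only from the line numbers logged here (common.sh@16-33); an illustration under those assumptions, not the actual SPDK source:

    shopt -s extglob                      # the +([0-9]) strip below is an extended glob

    get_meminfo() {
        local get=$1                      # key to look up (common.sh@17)
        local node=${2:-}                 # optional NUMA node (@18)
        local var val                     # @19
        local mem_f mem                   # @20
        mem_f=/proc/meminfo               # @22
        # @23/@25: the path probe runs before the -n check, so an empty $node
        # yields the ".../node/node/meminfo" test seen in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"         # @28
        mem=("${mem[@]#Node +([0-9]) }")  # @29: drop any per-node "Node N " prefix
        # @16: the printf '%s\n' snapshot dump visible before each scan feeds the loop
        while IFS=': ' read -r var val _; do   # @31
            # @32: $get is unquoted, i.e. a glob pattern, which is why xtrace
            # renders it backslash-escaped (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p)
            [[ $var == $get ]] || continue
            echo "$val" && return 0       # @33
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Because every call rescans the whole snapshot, a single lookup such as get_meminfo HugePages_Surp (hugepages.sh@99 above) replays the IFS/read/compare/continue sequence once per meminfo key, which is what makes this stretch of the log so long.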
00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.969 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45796856 kB' 'MemAvailable: 49299000 kB' 'Buffers: 2704 kB' 'Cached: 10297488 kB' 'SwapCached: 0 kB' 'Active: 7309124 kB' 'Inactive: 3506596 kB' 'Active(anon): 6914532 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518704 kB' 'Mapped: 201780 kB' 'Shmem: 6399004 kB' 'KReclaimable: 189592 kB' 'Slab: 556844 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367252 kB' 'KernelStack: 13216 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8020096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.970 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:40.971 nr_hugepages=1025 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.971 resv_hugepages=0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.971 surplus_hugepages=0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.971 anon_hugepages=0 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.971 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45797076 kB' 'MemAvailable: 49299220 kB' 'Buffers: 2704 kB' 'Cached: 10297508 kB' 'SwapCached: 0 kB' 'Active: 7310640 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520244 kB' 'Mapped: 201780 kB' 'Shmem: 6399024 kB' 'KReclaimable: 189592 kB' 'Slab: 556836 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367244 kB' 'KernelStack: 12976 kB' 'PageTables: 9920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8020240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 
12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
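The wall of [[ Key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue entries above is xtrace output from get_meminfo scanning a snapshot of /proc/meminfo one field at a time; bash's trace prints the quoted right-hand side of [[ == ]] with every character backslash-escaped, which is why the key appears backslashed. A minimal sketch of the same scan, assuming the variable names shown in the trace (the body is a reconstruction, not the SPDK source):

#!/usr/bin/env bash
# Sketch of the scan pattern traced above (get_meminfo in setup/common.sh).
# var/val/mem/mem_f mirror the traced names; the body is assumed.
get_meminfo_sketch() {
    local get=$1
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"                    # snapshot, one entry per line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
        [[ $var == "$get" ]] || continue         # the long run of "continue" above
        echo "$val"                              # e.g. "echo 1025" in the trace
        return 0
    done
    return 1
}
get_meminfo_sketch HugePages_Total

Snapshotting the file first and scanning the array keeps the read consistent even if /proc/meminfo changes mid-scan.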
00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.972 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22587584 kB' 'MemUsed: 10289356 kB' 'SwapCached: 0 kB' 'Active: 4824660 kB' 'Inactive: 3263864 kB' 'Active(anon): 4636088 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799748 kB' 'Mapped: 64156 kB' 'AnonPages: 292384 kB' 'Shmem: 4347312 kB' 'KernelStack: 8056 kB' 'PageTables: 6624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309376 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
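Between the system-wide scan and the per-node one, the helper switches its input file: given a node argument it reads /sys/devices/system/node/node0/meminfo (the -e test and mem_f= reassignment traced above), then strips the "Node N " prefix those per-node files carry so the same key/value scan works unchanged. A short sketch of that selection step, under the same naming assumptions:

#!/usr/bin/env bash
# Sketch of the per-node file selection and prefix strip seen in the trace.
shopt -s extglob                                  # +([0-9]) below is an extglob pattern
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0", as in the traced
# mem=("${mem[@]#Node +([0-9]) }") expansion
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]:0:3}"                     # first few entries, now uniform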
00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.973 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.974 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23208884 kB' 'MemUsed: 4455868 kB' 'SwapCached: 0 kB' 'Active: 2485500 kB' 'Inactive: 242732 kB' 'Active(anon): 2279480 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2500496 kB' 'Mapped: 137768 kB' 'AnonPages: 227780 kB' 'Shmem: 2051744 kB' 'KernelStack: 5096 kB' 'PageTables: 3236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76356 kB' 'Slab: 247460 kB' 'SReclaimable: 76356 kB' 'SUnreclaim: 171104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.975 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
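The node-1 scan above winds down the same way node 0's did: a few entries further on it hits HugePages_Surp, echoes 0, and returns, after which the test compares what the kernel actually placed on each node against the requested split. That comparison is order-insensitive: indexing a sparse array by the page counts themselves makes the counts come back as sorted keys, which is why the upcoming "node0=512 expecting 513" / "node1=513 expecting 512" lines still end in a passing [[ 512 513 == 512 513 ]]. A sketch of that reduction (array names and the 512/513 values come from the trace; the surrounding scaffolding is assumed):

#!/usr/bin/env bash
# Sketch of the sorted_t/sorted_s check at hugepages.sh@126-130.
declare -a nodes_test=([0]=512 [1]=513)   # what the kernel actually gave us
declare -a nodes_sys=([0]=513 [1]=512)    # the requested split, per node (assumed order)
declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1          # count used as array index
    sorted_s[nodes_sys[node]]=1
done
# ${!arr[*]} lists indices in ascending order, so both sides become "512 513"
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "split accepted: ${!sorted_t[*]}"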
00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.976 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:40.977 node0=512 expecting 513 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:40.977 node1=513 expecting 512 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:40.977 00:02:40.977 real 0m1.401s 00:02:40.977 user 0m0.569s 00:02:40.977 sys 0m0.790s 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:40.977 12:55:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:40.977 ************************************ 00:02:40.977 END TEST odd_alloc 00:02:40.977 ************************************ 00:02:41.237 12:55:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:41.237 12:55:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:41.237 12:55:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.237 12:55:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.237 12:55:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.237 ************************************ 00:02:41.237 START TEST custom_alloc 00:02:41.237 ************************************ 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.237 12:55:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
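With nodes_hp[0]=512 and nodes_hp[1]=1024 in place, the next entries trace the HUGENODE assembly: one "nodes_hp[N]=count" string per node, joined by the comma that the function's local IFS=, supplies on array expansion. Reduced to a standalone sketch (names and values from the trace):

#!/usr/bin/env bash
# Sketch of the HUGENODE string build traced at hugepages.sh@181-187.
nodes_hp=([0]=512 [1]=1024)             # per-node targets set above
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
IFS=,                                    # the real function runs with local IFS=,
echo "HUGENODE=${HUGENODE[*]}"           # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"       # 1536

The 1536 total reappears a few entries later as nr_hugepages=1536, once setup.sh has been invoked with that HUGENODE string in its environment.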
00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.237 12:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:42.172 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.172 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:42.172 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.172 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.172 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.172 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.172 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.172 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.172 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.172 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:42.172 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:42.172 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:02:42.172 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.172 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.172 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.172 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.172 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44738312 kB' 'MemAvailable: 48240456 kB' 'Buffers: 2704 kB' 'Cached: 10297600 kB' 'SwapCached: 0 kB' 'Active: 7308524 kB' 'Inactive: 3506596 kB' 'Active(anon): 6913932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518096 kB' 'Mapped: 201968 kB' 'Shmem: 6399116 kB' 'KReclaimable: 189592 kB' 'Slab: 556812 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367220 kB' 'KernelStack: 12720 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8018472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.435 12:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 iterations elided: each remaining field from MemFree through HardwareCorrupted fails the AnonHugePages test and the loop continues ...]
00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44738060 kB' 'MemAvailable: 48240204 kB' 'Buffers: 2704 kB' 'Cached: 10297604 kB' 'SwapCached: 0 kB' 'Active: 7308692 kB' 'Inactive: 3506596 kB' 'Active(anon): 6914100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518256 kB' 'Mapped: 201912 kB' 'Shmem: 6399120 kB' 'KReclaimable: 189592 kB' 'Slab: 556812 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367220 kB' 'KernelStack: 12768 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8018492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
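
Each get_meminfo call above follows the same setup/common.sh@17-33 pattern: snapshot the meminfo file with mapfile, strip any "Node N" prefix (the @29 line) when a per-node file is used, then walk the fields with IFS=': ' read until the requested key matches and its value is echoed. That scan is what produces one failed [[ ... ]] test plus continue per field in the trace. A compact equivalent, simplified from the trace to a plain read loop (a sketch, not the verbatim setup/common.sh):

    # Sketch of the get_meminfo pattern traced above (setup/common.sh@17-33).
    # Simplified: the real helper snapshots via mapfile and, for a per-node
    # file, strips the "Node N " prefix before matching.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo

        # per-node stats live under sysfs when a node is requested
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the elided iterations above
            echo "$val"                        # e.g. 0, or 1536 for HugePages_Total
            return 0
        done < "$mem_f"
        return 1
    }

Against the snapshot just printed, get_meminfo AnonHugePages yields 0 and get_meminfo HugePages_Total yields 1536.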
00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.437 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 iterations elided: each field from MemFree through HugePages_Rsvd fails the HugePages_Surp test and the loop continues ...]
00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44741292 kB' 'MemAvailable: 48243436 kB' 'Buffers: 2704 kB' 'Cached: 10297616 kB' 'SwapCached: 0 kB' 'Active: 7308396 kB' 'Inactive: 3506596 kB' 'Active(anon): 6913804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517888 kB' 'Mapped: 201796 kB' 'Shmem: 6399132 kB' 'KReclaimable: 189592 kB' 'Slab: 556772 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367180 kB' 'KernelStack: 12752 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8018512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
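
The snapshots already confirm the custom allocation before the per-node checks run: HugePages_Total and HugePages_Free are 1536, exactly nodes_hp[0] + nodes_hp[1] = 512 + 1024, and Hugetlb is 1536 x 2048 kB = 3145728 kB at the 2048 kB page size, consistent with verify_nr_hugepages expecting nr_hugepages=1536 with surp=0 and rsvd=0. A quick illustrative check (arithmetic only, not part of the test scripts):

    # Illustrative arithmetic; mirrors the numbers in the snapshots above.
    nodes_hp=([0]=512 [1]=1024)
    total=0
    for n in "${nodes_hp[@]}"; do
        (( total += n ))
    done
    echo "$total"               # 1536 -> HugePages_Total / HugePages_Free
    echo "$(( total * 2048 ))"  # 3145728 kB -> Hugetlb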
00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.440 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 iterations elided: each field from MemFree through AnonHugePages fails the HugePages_Rsvd test and the loop continues ...]
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- 
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:42.443 nr_hugepages=1536
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:42.443 resv_hugepages=0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:42.443 surplus_hugepages=0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:42.443 anon_hugepages=0
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:42.443 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44741040 kB' 'MemAvailable: 48243184 kB' 'Buffers: 2704 kB' 'Cached: 10297644 kB' 'SwapCached: 0 kB' 'Active: 7308432 kB' 'Inactive: 3506596 kB' 'Active(anon): 6913840 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517896 kB' 'Mapped: 201796 kB' 'Shmem: 6399160 kB' 'KReclaimable: 189592 kB' 'Slab: 556772 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367180 kB' 'KernelStack: 12752 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8018532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[xtrace elided 00:02:42.443-445: the @31-32 loop again walks the snapshot key by key (MemTotal, MemFree, ..., CmaFree, Unaccepted) with continue until HugePages_Total matches, as the next lines show]
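Annotation: the @31-@33 statements traced throughout this run are setup/common.sh's get_meminfo helper scanning a meminfo snapshot one key at a time. The snapshot itself is self-consistent: HugePages_Total 1536 at Hugepagesize 2048 kB gives 1536 * 2048 = 3145728 kB, matching the Hugetlb line. Below is a minimal bash sketch reconstructed from the trace, not copied from the repository, so the loop form and exact lines are assumptions; the local names, the /proc vs per-node file selection, and the "Node N " prefix strip mirror what the trace shows.

    # Reconstructed sketch of get_meminfo as suggested by the xtrace above;
    # the real setup/common.sh may structure the read loop differently.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        # A per-node query reads that node's own meminfo instead (common.sh@23-24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total    # prints 1536 on this host
    get_meminfo HugePages_Surp 0   # node0 surplus; prints 0 in this run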
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
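Annotation: the get_nodes pass records how many hugepages each NUMA node holds; the trace only shows the already-expanded assignments (512 for node0, 1024 for node1, summing to the 1536 verified above). A sketch under the assumption that the counts come from each node's sysfs nr_hugepages counter, which the trace does not confirm:

    # Sketch of the get_nodes step; the sysfs path for the counts is an assumption.
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # matches hugepages.sh@33 just below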
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:42.445 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:42.446 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22574620 kB' 'MemUsed: 10302320 kB' 'SwapCached: 0 kB' 'Active: 4823308 kB' 'Inactive: 3263864 kB' 'Active(anon): 4634736 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799752 kB' 'Mapped: 64056 kB' 'AnonPages: 290584 kB' 'Shmem: 4347316 kB' 'KernelStack: 7704 kB' 'PageTables: 4672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309356 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided 00:02:42.446-447: the @31-32 loop walks the node0 snapshot (MemTotal, MemFree, MemUsed, ..., HugePages_Total, HugePages_Free) with continue while scanning for HugePages_Surp]
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
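Annotation: with node0 done, each node's expected count is adjusted by reserved and surplus pages, as traced at hugepages.sh@115-117 (node1 follows next). A sketch of that accounting, reusing the get_meminfo sketch above; the nodes_test seed and the resv source are assumptions mirroring the traced values:

    declare -a nodes_test=([0]=512 [1]=1024)   # assumed seed, mirroring nodes_sys above
    resv=0                                     # HugePages_Rsvd reported earlier in the trace
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")     # hugepages.sh@117; 0 for both nodes here
        (( nodes_test[node] += surp ))
    done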
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:42.447 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22166328 kB' 'MemUsed: 5498424 kB' 'SwapCached: 0 kB' 'Active: 2485612 kB' 'Inactive: 242732 kB' 'Active(anon): 2279592 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242732 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2500636 kB' 'Mapped: 137740 kB' 'AnonPages: 227808 kB' 'Shmem: 2051884 kB' 'KernelStack: 5080 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76356 kB' 'Slab: 247416 kB' 'SReclaimable: 76356 kB' 'SUnreclaim: 171060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided 00:02:42.448-708: the @31-32 loop walks the node1 snapshot (MemTotal, MemFree, MemUsed, ..., HugePages_Total, HugePages_Free) with continue while scanning for HugePages_Surp]
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.709 12:55:04 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:42.709 node0=512 expecting 512 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:42.709 node1=1024 expecting 1024 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:42.709 00:02:42.709 real 0m1.443s 00:02:42.709 user 0m0.604s 00:02:42.709 sys 0m0.801s 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:42.709 12:55:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:42.709 ************************************ 00:02:42.709 END TEST custom_alloc 00:02:42.709 ************************************ 00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:42.709 12:55:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.709 ************************************ 00:02:42.709 START TEST no_shrink_alloc 00:02:42.709 ************************************ 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.709 12:55:04 
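The hugepages.sh@126-130 trace above is custom_alloc's closing check: each NUMA node must have ended up with the page count the test asked for (512 on node0, 1024 on node1 in this run). A minimal standalone sketch of that pattern, with array names mirroring the trace but values hard-coded for illustration (the real script fills nodes_test from sysfs and folds nodes_sys in the same way via sorted_s, omitted here):

    #!/usr/bin/env bash
    # Sketch: per-node hugepage expectation check, as in setup/hugepages.sh@126-130.
    nodes_test=([0]=512 [1]=1024)   # illustrative: pages requested per node
    declare -A sorted_t
    counts=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # collect the distinct requested counts
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
        counts+=("${nodes_test[node]}")
    done
    # Mirrors [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] above: comma-join the observed
    # counts and compare against the expected literal for this run.
    [[ $(IFS=,; echo "${counts[*]}") == 512,1024 ]] && echo OK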
00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:42.709 12:55:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:42.709 12:55:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:42.709 ************************************
00:02:42.709 START TEST no_shrink_alloc
00:02:42.709 ************************************
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
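get_test_nr_hugepages above turns the requested size into a page count: 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and since node 0 was the only node named, get_test_nr_hugepages_per_node parks all 1024 pages in nodes_test[0]. A sketch of that arithmetic, assuming both size and default_hugepages are in kB as this run's numbers suggest (an illustration of hugepages.sh@49-73, not the real script):

    #!/usr/bin/env bash
    # Sketch: size-to-page-count logic from the get_test_nr_hugepages trace.
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")            # ('0') in this run
        local default_hugepages=2048     # kB; Hugepagesize from /proc/meminfo
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
        nodes_test=()
        local _no_nodes
        for _no_nodes in "${node_ids[@]}"; do
            nodes_test[_no_nodes]=$nr_hugepages   # all pages on each named node
        done
    }
    get_test_nr_hugepages 2097152 0
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # prints 1024 / 1024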
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:42.709 12:55:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:43.655 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.655 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.655 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.655 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.655 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.655 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.655 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.655 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.655 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.655 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.655 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.655 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.655 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.655 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.655 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.655 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.655 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.921 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.922 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45691852 kB' 'MemAvailable: 49193996 kB' 'Buffers: 2704 kB' 'Cached: 10297732 kB' 'SwapCached: 0 kB' 'Active: 7310056 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915464 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519380 kB' 'Mapped: 202864 kB' 'Shmem: 6399248 kB' 'KReclaimable: 189592 kB' 'Slab: 556800 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367208 kB' 'KernelStack: 12816 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8053492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:43.922 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.922 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:43.922 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.922 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: the compare/continue/IFS/read cycle repeats for each field from MemFree through HardwareCorrupted ...]
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
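Every get_meminfo call in this trace has the same shape: slurp the meminfo file, then scan it line by line with IFS=': ' until the requested field matches, and echo its value. Reassembled from the setup/common.sh@17-33 line numbers visible above, the function is approximately as follows (a reconstruction from the xtrace, not a verbatim copy of setup/common.sh):

    #!/usr/bin/env bash
    # Reconstruction of setup/common.sh's get_meminfo, as implied by the trace.
    shopt -s extglob   # needed for the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead (common.sh@23)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        [[ -n $node ]] && :   # the @25 guard in the trace; its true-branch never fires in this run
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # common.sh@33 in the trace: echo <value>; return 0
                return 0
            fi
            continue           # common.sh@32: skip to the next meminfo line
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot printed above, get_meminfo AnonHugePages yields 0 (the echo 0 seen at common.sh@33) and get_meminfo HugePages_Total would yield 1024.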
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45691964 kB' 'MemAvailable: 49194108 kB' 'Buffers: 2704 kB' 'Cached: 10297732 kB' 'SwapCached: 0 kB' 'Active: 7309952 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915360 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519316 kB' 'Mapped: 202804 kB' 'Shmem: 6399248 kB' 'KReclaimable: 189592 kB' 'Slab: 556760 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367168 kB' 'KernelStack: 12848 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8053508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.923 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the compare/continue/IFS/read cycle repeats for every field from MemFree through HugePages_Rsvd ...]
00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
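The node= argument is empty in all of these calls, so the scans read the global /proc/meminfo. For per-node checks, the same scan works against /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix; that is what the mem=("${mem[@]#Node +([0-9]) }") strip at common.sh@29 is for. A small illustration of the per-node form (node number and field are arbitrary examples, and this assumes a NUMA system with a node0 directory):

    #!/usr/bin/env bash
    # Per-node form: /sys/devices/system/node/node0/meminfo lines look like
    #   "Node 0 HugePages_Total:   512"
    # so the "Node N " prefix must be stripped before the IFS=': ' scan.
    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Total ]] && echo "node$node HugePages_Total=$val"
    done < <(printf '%s\n' "${mem[@]}")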
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.925 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.926 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.927 nr_hugepages=1024 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.927 resv_hugepages=0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.927 surplus_hugepages=0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.927 anon_hugepages=0 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45691744 kB' 'MemAvailable: 49193888 kB' 'Buffers: 2704 kB' 'Cached: 10297772 kB' 'SwapCached: 0 kB' 'Active: 7309968 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915376 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519288 kB' 'Mapped: 202804 kB' 'Shmem: 6399288 kB' 'KReclaimable: 189592 kB' 'Slab: 556852 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367260 kB' 'KernelStack: 12848 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8053552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
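What the trace above is exercising: the traced setup/common.sh helper get_meminfo snapshots the whole of /proc/meminfo with mapfile, prints the snapshot once, then splits each 'Key: value' line on IFS=': ' and hits "continue" on every key until the requested one (HugePages_Rsvd here) matches, echoing only the value. A minimal self-contained bash sketch of that pattern follows; it is an illustration of the traced idiom, not the SPDK source itself:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo-style lookup: print the value of one meminfo key,
    # node-wide by default, or for one NUMA node if an index is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }       # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every non-matching key
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Rsvd     # prints 0 on this box, per the snapshot above
    get_meminfo HugePages_Surp 0   # same lookup against node0's meminfo

The linear scan is also why the xtrace is so chatty: under set -x every skipped key costs three trace entries (the IFS assignment, the read, and the [[ ]] test).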
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.927 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45691744 kB' 'MemAvailable: 49193888 kB' 'Buffers: 2704 kB' 'Cached: 10297772 kB' 'SwapCached: 0 kB' 'Active: 7309968 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915376 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519288 kB' 'Mapped: 202804 kB' 'Shmem: 6399288 kB' 'KReclaimable: 189592 kB' 'Slab: 556852 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367260 kB' 'KernelStack: 12848 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8053552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[... xtrace scan elided: each key up to the match is skipped with "continue"; at HugePages_Total, setup/common.sh@33 echoes the value ...]
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
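With surp=0 and resv=0 established, hugepages.sh@110 re-reads HugePages_Total (1024) and re-checks the accounting identity HugePages_Total == nr_hugepages + surp + resv before calling get_nodes, which globs the sysfs node directories to learn the per-node hugepage split. A sketch of that enumeration is below; the nr_hugepages path assumes the 2048 kB default page size reported in the snapshots, and this is an illustration rather than the traced source:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    # Enumerate NUMA nodes the way the traced get_nodes does: glob the
    # sysfs node directories and key an array by the numeric node index.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}   # "/sys/devices/system/node/node0" -> "0"
        nodes_sys[$idx]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on the machine in this log
    for idx in "${!nodes_sys[@]}"; do
        echo "node$idx: ${nodes_sys[$idx]} x 2 MiB hugepages"
    done

On this runner the split is lopsided by design: the trace below records nodes_sys[0]=1024 and nodes_sys[1]=0, i.e. all 1024 pages sit on node 0.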
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.929 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21530200 kB' 'MemUsed: 11346740 kB' 'SwapCached: 0 kB' 'Active: 4823212 kB' 'Inactive: 3263864 kB' 'Active(anon): 4634640 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799756 kB' 'Mapped: 64380 kB' 'AnonPages: 290416 kB' 'Shmem: 4347320 kB' 'KernelStack: 7672 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309412 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace scan elided: node0's keys are skipped with "continue" until HugePages_Surp matches ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.930 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:43.931 node0=1024 expecting 1024 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.931 12:55:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.866 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.866 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.866 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.866 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.866 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.866 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.866 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.866 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.866 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.866 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.866 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.866 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.866 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.866 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.866 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.866 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.866 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
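The get_meminfo trace that just completed is setup/common.sh walking /proc/meminfo one field at a time until it hits the requested key. A minimal reconstruction of that pattern from the traced line numbers follows; it is an assumed shape, and the real helper may differ in details such as error handling:

  # Sketch of the get_meminfo pattern traced above (reconstruction, not the
  # verbatim SPDK source).
  shopt -s extglob   # needed for the +([0-9]) prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # prefer the per-node file when a node id is given and the file exists
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # drop "Node 0 " prefixes, if any
      while IFS=': ' read -r var val _; do       # split "Field:   value kB"
          [[ $var == "$get" ]] || continue       # scan until the requested field
          echo "$val" && return 0                # e.g. "0" for HugePages_Surp
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # usage: surp=$(get_meminfo HugePages_Surp)    # -> 0 on this node

The linear scan is why the trace shows one compare/continue pair per meminfo field: with `set -x` enabled, every loop iteration is echoed.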
00:02:45.130 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45692356 kB' 'MemAvailable: 49194500 kB' 'Buffers: 2704 kB' 'Cached: 10297844 kB' 'SwapCached: 0 kB' 'Active: 7310356 kB' 'Inactive: 3506596 kB' 'Active(anon): 6915764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519132 kB' 'Mapped: 202868 kB' 'Shmem: 6399360 kB' 'KReclaimable: 189592 kB' 'Slab: 556752 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367160 kB' 'KernelStack: 12848 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8053608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
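The meminfo snapshot above is self-consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) is 2097152 kB, exactly the Hugetlb line, so all huge memory on the box is the preallocated 2 MB pool and none of it is in use (HugePages_Free equals HugePages_Total). The hugepages.sh@96 test is a gate on that accounting: AnonHugePages is only fetched when transparent hugepages are not globally disabled, i.e. when the bracketed selection in sysfs is not "[never]". A sketch of that gate, reusing the get_meminfo sketch above (variable names are illustrative):

  # Gate from hugepages.sh@96: skip THP accounting when THP is globally off.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)                 # kB of THP-backed anon memory
  fi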
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:45.130 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the same setup/common.sh@31-@32 read/compare/continue cycle repeats for every field from MemFree through HardwareCorrupted]
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
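With anon resolved to 0, the script pulls HugePages_Surp through the same scan. The surplus and reserved counters it scrapes out of meminfo are also exposed per page size (and per node) under the kernel's standard hugetlb sysfs layout; a small sketch of reading them directly, shown for the 2048 kB size used in this run:

  sz=2048kB
  base=/sys/kernel/mm/hugepages/hugepages-$sz
  echo "total:   $(<"$base/nr_hugepages")"        # 1024 in this run
  echo "free:    $(<"$base/free_hugepages")"      # 1024
  echo "resv:    $(<"$base/resv_hugepages")"      # 0 -> HugePages_Rsvd
  echo "surplus: $(<"$base/surplus_hugepages")"   # 0 -> HugePages_Surp
  # per-node view backing the "node0=..." check in this test
  cat "/sys/devices/system/node/node0/hugepages/hugepages-$sz/nr_hugepages"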
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45702660 kB' 'MemAvailable: 49204804 kB' 'Buffers: 2704 kB' 'Cached: 10297848 kB' 'SwapCached: 0 kB' 'Active: 7311032 kB' 'Inactive: 3506596 kB' 'Active(anon): 6916440 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520260 kB' 'Mapped: 203248 kB' 'Shmem: 6399364 kB' 'KReclaimable: 189592 kB' 'Slab: 556752 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367160 kB' 'KernelStack: 12912 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8055244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:45.131 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the read/compare/continue cycle repeats for every field from MemFree through HugePages_Rsvd]
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
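At this point anon=0 and surp=0; the last get_meminfo call below fetches HugePages_Rsvd, after which verify_nr_hugepages compares per-node counts against what the test expects (the earlier "node0=1024 expecting 1024" line). A simplified reconstruction of that comparison, inferred only from the hugepages.sh line numbers in the trace; the real script's bookkeeping (how nodes_test and nodes_sys are populated) may differ:

  # Reconstruction of the verify_nr_hugepages comparison (assumed seeds).
  surp=0 resv=0
  nodes_test=([0]=1024)   # expected per-node count, seeded by the test
  nodes_sys=([0]=1024)    # count actually reported by the kernel
  verify_nr_hugepages() {
      local node
      for node in "${!nodes_test[@]}"; do
          ((nodes_test[node] += surp + resv))             # @117: fold surplus/reserved in
          sorted_t[${nodes_test[node]}]=1                 # @127: bucket observed values
          sorted_s[${nodes_sys[node]}]=1
          echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
          [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1   # @130
      done
  }
  verify_nr_hugepages   # prints "node0=1024 expecting 1024" on this box

This is the whole point of the no_shrink_alloc case: setup.sh was asked for NRHUGE=512 but must not shrink the existing 1024-page allocation, so the verification still expects 1024.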
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45698740 kB' 'MemAvailable: 49200884 kB' 'Buffers: 2704 kB' 'Cached: 10297868 kB' 'SwapCached: 0 kB' 'Active: 7314404 kB' 'Inactive: 3506596 kB' 'Active(anon): 6919812 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523608 kB' 'Mapped: 203248 kB' 'Shmem: 6399384 kB' 'KReclaimable: 189592 kB' 'Slab: 556840 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367248 kB' 'KernelStack: 12880 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:45.132 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the read/compare/continue cycle repeats from MemFree onward; the captured log breaks off mid-scan]
00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.133 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:45.134 nr_hugepages=1024 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:45.134 resv_hugepages=0 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:45.134 surplus_hugepages=0 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:45.134 anon_hugepages=0 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
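The HugePages_Rsvd lookup that just returned above and the HugePages_Total lookup continuing below both go through the same get_meminfo helper in setup/common.sh. A minimal sketch of the pattern the xtrace is exercising (an illustrative reconstruction from the trace, not a verbatim copy of common.sh):

    shopt -s extglob                     # the +([0-9]) pattern below needs extglob
    get_meminfo() {                      # usage: get_meminfo <Key> [<numa-node>]
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # with a node argument, read the per-node sysfs copy instead;
        # its lines carry a "Node N " prefix, which the @29 expansion strips
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With no node argument the [[ -e .../node/meminfo ]] test fails (as in the trace above) and the helper falls back to /proc/meminfo.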
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.134 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45695112 kB' 'MemAvailable: 49197256 kB' 'Buffers: 2704 kB' 'Cached: 10297888 kB' 'SwapCached: 0 kB' 'Active: 7315792 kB' 'Inactive: 3506596 kB' 'Active(anon): 6921200 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524988 kB' 'Mapped: 203728 kB' 'Shmem: 6399404 kB' 'KReclaimable: 189592 kB' 'Slab: 556840 kB' 'SReclaimable: 189592 kB' 'SUnreclaim: 367248 kB' 'KernelStack: 12864 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8059788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[ repetitive xtrace condensed: setup/common.sh@31-32 repeated IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue for every key from MemTotal through Unaccepted; none matched ]
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
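The hugepages.sh checks traced above reduce to one invariant: the kernel's HugePages_Total (1024 here) must equal nr_hugepages plus surplus and reserved pages, verified globally and then per NUMA node, with get_nodes discovering the nodes from sysfs. Roughly, under the names the trace shows (illustrative sketch using the get_meminfo sketch above; the trace only shows the already-expanded assignments nodes_sys[0]=1024 and nodes_sys[1]=0, so reading each node's 2048kB pool size is an assumption about where those values come from):

    shopt -s extglob
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # assumed source of the per-node counts: 1024 on node0, 0 on node1 here
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}            # 2 on this machine
    (( no_nodes > 0 ))

The per-node pass below then re-reads HugePages_Surp from /sys/devices/system/node/node0/meminfo, which is why get_meminfo switches mem_f to the node0 file.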
+([0-9]) }") 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21543164 kB' 'MemUsed: 11333776 kB' 'SwapCached: 0 kB' 'Active: 4823256 kB' 'Inactive: 3263864 kB' 'Active(anon): 4634684 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3263864 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7799756 kB' 'Mapped: 64820 kB' 'AnonPages: 290472 kB' 'Shmem: 4347320 kB' 'KernelStack: 7688 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 309424 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.135 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.136 node0=1024 expecting 1024 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.136 00:02:45.136 real 0m2.625s 00:02:45.136 user 0m1.074s 00:02:45.136 sys 0m1.468s 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.136 12:55:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:45.136 ************************************ 00:02:45.136 END TEST no_shrink_alloc 
00:02:45.136 ************************************ 00:02:45.394 12:55:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:45.394 12:55:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:45.394 00:02:45.394 real 0m11.069s 00:02:45.394 user 0m4.247s 00:02:45.394 sys 0m5.717s 00:02:45.394 12:55:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.394 12:55:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.394 ************************************ 00:02:45.394 END TEST hugepages 00:02:45.394 ************************************ 00:02:45.394 12:55:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:45.394 12:55:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:45.394 12:55:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.394 12:55:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.394 12:55:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:45.394 ************************************ 00:02:45.394 START TEST driver 00:02:45.394 ************************************ 00:02:45.394 12:55:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:45.394 * Looking for test storage... 
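clear_hp above walks every NUMA node's hugepage directories and writes 0 for each page size before exporting CLEAR_HUGE=yes. The trace only shows the bare 'echo 0' commands; the redirect target is assumed to be each nr_hugepages file, which is the standard sysfs interface:

# Sketch: release all reserved hugepages on every node and page size (needs root).
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target; the trace elides the redirect
    done
done
export CLEAR_HUGE=yes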
00:02:45.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.394 12:55:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:45.394 12:55:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.394 12:55:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.927 12:55:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:47.927 12:55:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:47.927 12:55:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.927 12:55:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:47.927 ************************************ 00:02:47.927 START TEST guess_driver 00:02:47.927 ************************************ 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:47.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:47.927 Looking for driver=vfio-pci 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.927 12:55:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.306 12:55:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.306 12:55:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.306 12:55:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.306 12:55:10
[... the setup/driver.sh@58/@61/@57 marker-check triple repeats identically at 12:55:10 for each remaining device line of the setup.sh config output ...]
00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.245 12:55:11 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.780 00:02:52.780 real 0m4.801s 00:02:52.780 user 0m1.115s 00:02:52.780 sys 0m1.789s 00:02:52.780 12:55:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.780 12:55:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:52.780 ************************************ 00:02:52.780 END TEST guess_driver 00:02:52.780 ************************************ 00:02:52.780 12:55:14 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:02:52.780 00:02:52.780 real 0m7.363s 00:02:52.780 user 0m1.662s 00:02:52.780 sys 0m2.850s 00:02:52.780 12:55:14
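pick_driver settled on vfio-pci because the host exposes populated IOMMU groups (141 of them) and modprobe --show-depends resolves vfio_pci to real .ko modules. A condensed sketch of that decision; the uio_pci_generic fallback is an assumption based on SPDK convention, not something this trace exercises:

# Sketch: choose vfio-pci when the IOMMU is usable, otherwise fall back.
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)   # 141 entries in this run
    if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &>/dev/null; then   # assumed fallback
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}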
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.780 12:55:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:52.780 ************************************ 00:02:52.780 END TEST driver 00:02:52.780 ************************************ 00:02:52.780 12:55:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:52.780 12:55:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:52.780 12:55:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.780 12:55:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.780 12:55:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:52.780 ************************************ 00:02:52.780 START TEST devices 00:02:52.780 ************************************ 00:02:52.780 12:55:14 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:52.780 * Looking for test storage... 00:02:52.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.780 12:55:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:52.780 12:55:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:52.780 12:55:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.780 12:55:14 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.158 12:55:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:54.159 
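block_in_use and the size check just below gate which disk becomes the test device: spdk-gpt.py and blkid must find no partition-table signature, and the disk has to clear min_disk_size (3221225472 bytes, i.e. 3 GiB). The sysfs size file counts 512-byte sectors, which is how the trace arrives at 1000204886016 bytes for nvme0n1. A sketch of that gate (usable_disk is an illustrative name):

# Sketch: accept a block device with no partition table and enough capacity.
min_disk_size=$((3 * 1024 * 1024 * 1024))
usable_disk() {
    local dev=$1 pt sectors
    pt=$(blkid -s PTTYPE -o value "/dev/$dev")   # empty output => no table
    [[ -z $pt ]] || return 1
    sectors=$(< "/sys/block/$dev/size")          # 512-byte sectors
    (( sectors * 512 >= min_disk_size ))
}
# usable_disk nvme0n1 && echo "test disk: nvme0n1"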
12:55:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:54.159 No valid GPT data, bailing 00:02:54.159 12:55:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:54.159 12:55:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:54.159 12:55:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.159 12:55:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:54.159 ************************************ 00:02:54.159 START TEST nvme_mount 00:02:54.159 ************************************ 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:54.159 12:55:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:55.539 Creating new GPT entries in memory. 00:02:55.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:55.539 other utilities. 00:02:55.539 12:55:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:55.539 12:55:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:55.539 12:55:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:55.539 12:55:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:55.539 12:55:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:56.476 Creating new GPT entries in memory. 00:02:56.476 The operation has completed successfully. 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3695265 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:56.476 12:55:17 
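partition_drive above first converts the 1 GiB partition size into 512-byte sectors ((( size /= 512 )) turns 1073741824 into 2097152), zaps any existing label, then creates each partition under flock so parallel jobs cannot race on the device node; 2048:2099199 is exactly 2097152 sectors starting at the 1 MiB boundary. A sketch with the same sgdisk calls (two slices shown, as the later dm_mount test uses):

# Sketch: wipe a disk and carve fixed 1 GiB partitions, serialized via flock.
disk=/dev/nvme0n1
size=$((1073741824 / 512))      # 1 GiB in 512-byte sectors -> 2097152
sgdisk "$disk" --zap-all        # destroy GPT and MBR structures
start=2048
for part in 1 2; do
    end=$((start + size - 1))   # part 1: 2048..2099199, part 2: 2099200..4196351
    flock "$disk" sgdisk "$disk" --new=$part:$start:$end
    start=$((end + 1))
done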
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.476 12:55:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.412 12:55:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.412 12:55:18
[... the setup/devices.sh@62 non-match / @60 read pair repeats identically for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 ...]
00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:57.672 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:57.672 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:57.931 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:57.932 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:57.932 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:57.932 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount --
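cleanup_nvme, traced just above, is the inverse of the mount step: unmount the scratch directory if it is still a mountpoint, then wipefs the partition and the whole disk so the next case starts from an unlabeled device. A sketch using the paths from this run:

# Sketch: tear down the nvme_mount area and scrub filesystem/GPT signatures.
nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
cleanup_nvme() {
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic at 0x438
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT headers and PMBR
}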
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.932 12:55:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.307 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.307 12:55:20
[... the setup/devices.sh@62 non-match / @60 read pair repeats identically for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 ...]
00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount --
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.308 12:55:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.265 12:55:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.265 12:55:21
[... the setup/devices.sh@62 non-match / @60 read pair repeats identically for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 ...]
00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:00.548 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:00.548 00:03:00.548 real 0m6.242s 00:03:00.548 user 0m1.519s 00:03:00.548 sys 0m2.307s 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:00.548 12:55:22 setup.sh.devices.nvme_mount --
common/autotest_common.sh@10 -- # set +x 00:03:00.548 ************************************ 00:03:00.548 END TEST nvme_mount 00:03:00.548 ************************************ 00:03:00.548 12:55:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:00.548 12:55:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:00.548 12:55:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.548 12:55:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.548 12:55:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:00.548 ************************************ 00:03:00.548 START TEST dm_mount 00:03:00.548 ************************************ 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:00.548 12:55:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:01.487 Creating new GPT entries in memory. 00:03:01.487 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:01.487 other utilities. 00:03:01.487 12:55:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:01.487 12:55:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:01.487 12:55:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:01.487 12:55:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:01.487 12:55:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:02.446 Creating new GPT entries in memory. 00:03:02.446 The operation has completed successfully. 00:03:02.446 12:55:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:02.446 12:55:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.446 12:55:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:02.446 12:55:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:02.446 12:55:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:03.828 The operation has completed successfully. 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3697655 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- 
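dm_mount layers a device-mapper target over the two freshly created partitions: dmsetup create produces /dev/mapper/nvme_dm_test, readlink resolves it to dm-0, and the holders/ directories confirm both partitions now sit underneath it. The trace does not show the dmsetup table itself; the sketch below assumes a plain linear concatenation:

# Sketch: concatenate nvme0n1p1 and nvme0n1p2 into one linear dm device.
p1_sz=$(blockdev --getsz /dev/nvme0n1p1)   # sizes in 512-byte sectors
p2_sz=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $p1_sz linear /dev/nvme0n1p1 0
$p1_sz $p2_sz linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)             # e.g. /dev/dm-0
test -e /sys/class/block/nvme0n1p1/holders/${dm##*/}   # both partitions hold dm-0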
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.828 12:55:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.767 12:55:26
[... the setup/devices.sh@62 non-match / @60 read pair repeats identically for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 ...]
00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:04.767 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:05.027 12:55:26
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.027 12:55:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:05.964 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:05.964 00:03:05.964 real 0m5.538s 00:03:05.964 user 0m0.929s 00:03:05.964 sys 0m1.446s 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.964 12:55:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:05.964 ************************************ 00:03:05.964 END TEST dm_mount 00:03:05.964 ************************************ 00:03:06.222 12:55:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.222 12:55:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:06.481 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:06.481 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:06.481 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:06.481 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.481 12:55:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:06.481 00:03:06.481 real 0m13.648s 00:03:06.481 user 0m3.067s 00:03:06.481 sys 0m4.764s 00:03:06.481 12:55:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.481 12:55:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:06.481 ************************************ 00:03:06.481 END TEST devices 00:03:06.481 ************************************ 00:03:06.481 12:55:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.481 00:03:06.481 real 0m42.519s 00:03:06.481 user 0m12.204s 00:03:06.481 sys 0m18.593s 00:03:06.481 12:55:27 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.481 12:55:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.481 ************************************ 00:03:06.481 END TEST setup.sh 00:03:06.481 ************************************ 00:03:06.481 12:55:27 -- common/autotest_common.sh@1142 -- # return 0 00:03:06.481 12:55:27 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:07.417 Hugepages 00:03:07.417 node hugesize free / total 00:03:07.417 node0 1048576kB 0 / 0 00:03:07.417 node0 2048kB 2048 / 2048 00:03:07.417 node1 1048576kB 0 / 0 00:03:07.417 node1 2048kB 0 / 0 00:03:07.417 00:03:07.417 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.417 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:07.417 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:07.417 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:07.417 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:07.676 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:07.676 12:55:29 -- spdk/autotest.sh@130 -- # uname -s 00:03:07.676 12:55:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:07.676 12:55:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:07.676 12:55:29 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.610 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:08.610 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.610 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:08.868 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:09.806 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.806 12:55:31 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:10.741 12:55:32 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:10.741 12:55:32 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:10.741 12:55:32 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:10.741 12:55:32 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:10.741 12:55:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:10.741 12:55:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:10.741 12:55:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:10.741 12:55:32 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:10.741 12:55:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:11.000 12:55:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:11.000 12:55:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:11.000 12:55:32 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.935 Waiting for block devices as requested 00:03:11.935 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:12.193 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:12.193 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:12.193 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:12.451 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:12.451 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:12.451 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.451 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.712 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:12.712 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:12.712 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:12.712 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:12.973 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:12.973 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:12.973 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.973 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:13.232 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:13.232 12:55:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:13.232 12:55:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:13.232 12:55:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:13.232 12:55:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:13.232 12:55:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:13.232 12:55:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:13.232 12:55:34 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:13.232 12:55:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:13.232 12:55:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:13.232 12:55:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:13.232 12:55:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:13.232 12:55:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:13.232 12:55:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:13.232 12:55:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:13.232 12:55:34 -- common/autotest_common.sh@1557 -- # continue 00:03:13.232 12:55:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:13.232 12:55:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:13.232 12:55:34 -- common/autotest_common.sh@10 -- # set +x 00:03:13.232 12:55:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:13.232 12:55:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:13.232 12:55:34 -- common/autotest_common.sh@10 -- # set +x 00:03:13.232 12:55:34 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.610 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.610 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:14.610 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.610 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.597 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.597 12:55:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:15.597 12:55:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:15.597 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:03:15.597 12:55:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:15.597 12:55:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:15.598 12:55:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:15.598 12:55:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:15.598 12:55:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:15.598 12:55:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:15.598 12:55:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:15.598 12:55:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:15.598 12:55:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:15.598 12:55:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.598 12:55:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:15.855 12:55:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:15.855 12:55:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:15.855 12:55:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:15.855 12:55:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:15.855 12:55:37 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:15.855 12:55:37 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:15.856 12:55:37 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:15.856 12:55:37 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:15.856 12:55:37 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:15.856 12:55:37 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3702827 00:03:15.856 12:55:37 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:15.856 12:55:37 -- common/autotest_common.sh@1598 -- # waitforlisten 3702827 00:03:15.856 12:55:37 -- common/autotest_common.sh@829 -- # '[' -z 3702827 ']' 00:03:15.856 12:55:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:15.856 12:55:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:15.856 12:55:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:15.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:15.856 12:55:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:15.856 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:03:15.856 [2024-07-15 12:55:37.401655] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
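The trace above backgrounds spdk_tgt and then blocks in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock accepts requests. A minimal sketch of that wait pattern, assuming rpc_get_methods as the probe call and a 0.5s poll interval (both assumptions; the autotest helper's internals differ):

  # Sketch only: poll the RPC socket until spdk_tgt answers, bail out if it dies.
  ./build/bin/spdk_tgt & pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done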
00:03:15.856 [2024-07-15 12:55:37.401758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702827 ] 00:03:15.856 EAL: No free 2048 kB hugepages reported on node 1 00:03:15.856 [2024-07-15 12:55:37.464119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.114 [2024-07-15 12:55:37.579929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.680 12:55:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:16.680 12:55:38 -- common/autotest_common.sh@862 -- # return 0 00:03:16.680 12:55:38 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:16.680 12:55:38 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:16.680 12:55:38 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:19.988 nvme0n1 00:03:19.988 12:55:41 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:19.988 [2024-07-15 12:55:41.637648] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:19.988 [2024-07-15 12:55:41.637695] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:19.988 request: 00:03:19.988 { 00:03:19.988 "nvme_ctrlr_name": "nvme0", 00:03:19.988 "password": "test", 00:03:19.988 "method": "bdev_nvme_opal_revert", 00:03:19.988 "req_id": 1 00:03:19.988 } 00:03:19.988 Got JSON-RPC error response 00:03:19.988 response: 00:03:19.988 { 00:03:19.988 "code": -32603, 00:03:19.988 "message": "Internal error" 00:03:19.988 } 00:03:19.988 12:55:41 -- common/autotest_common.sh@1604 -- # true 00:03:19.988 12:55:41 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:19.988 12:55:41 -- common/autotest_common.sh@1608 -- # killprocess 3702827 00:03:19.988 12:55:41 -- common/autotest_common.sh@948 -- # '[' -z 3702827 ']' 00:03:19.988 12:55:41 -- common/autotest_common.sh@952 -- # kill -0 3702827 00:03:19.988 12:55:41 -- common/autotest_common.sh@953 -- # uname 00:03:19.988 12:55:41 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:19.988 12:55:41 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3702827 00:03:19.988 12:55:41 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:19.988 12:55:41 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:19.988 12:55:41 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3702827' 00:03:19.988 killing process with pid 3702827 00:03:19.988 12:55:41 -- common/autotest_common.sh@967 -- # kill 3702827 00:03:19.988 12:55:41 -- common/autotest_common.sh@972 -- # wait 3702827 00:03:21.892 12:55:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:21.892 12:55:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:21.892 12:55:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:21.892 12:55:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:21.892 12:55:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:21.892 12:55:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:21.892 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:03:21.892 12:55:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:21.892 12:55:43 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.892 12:55:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.892 12:55:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.892 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:03:21.892 ************************************ 00:03:21.892 START TEST env 00:03:21.892 ************************************ 00:03:21.892 12:55:43 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.892 * Looking for test storage... 00:03:21.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:21.892 12:55:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.892 12:55:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.892 12:55:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.892 12:55:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.150 ************************************ 00:03:22.150 START TEST env_memory 00:03:22.150 ************************************ 00:03:22.150 12:55:43 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:22.150 00:03:22.150 00:03:22.150 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.150 http://cunit.sourceforge.net/ 00:03:22.150 00:03:22.150 00:03:22.150 Suite: memory 00:03:22.150 Test: alloc and free memory map ...[2024-07-15 12:55:43.640219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:22.150 passed 00:03:22.150 Test: mem map translation ...[2024-07-15 12:55:43.660187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:22.150 [2024-07-15 12:55:43.660208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:22.150 [2024-07-15 12:55:43.660264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:22.150 [2024-07-15 12:55:43.660275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:22.150 passed 00:03:22.150 Test: mem map registration ...[2024-07-15 12:55:43.700678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:22.150 [2024-07-15 12:55:43.700698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:22.150 passed 00:03:22.150 Test: mem map adjacent registrations ...passed 00:03:22.150 00:03:22.150 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.150 suites 1 1 n/a 0 0 00:03:22.150 tests 4 4 4 0 0 00:03:22.150 asserts 152 152 152 0 n/a 00:03:22.150 00:03:22.150 Elapsed time = 0.140 seconds 00:03:22.150 00:03:22.150 real 0m0.148s 00:03:22.150 user 0m0.143s 00:03:22.150 sys 0m0.005s 00:03:22.150 12:55:43 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.150 12:55:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:22.150 ************************************ 00:03:22.150 END TEST env_memory 00:03:22.150 ************************************ 00:03:22.150 12:55:43 env -- common/autotest_common.sh@1142 -- # return 0 00:03:22.150 12:55:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:22.150 12:55:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.150 12:55:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.150 12:55:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.150 ************************************ 00:03:22.150 START TEST env_vtophys 00:03:22.150 ************************************ 00:03:22.150 12:55:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:22.150 EAL: lib.eal log level changed from notice to debug 00:03:22.150 EAL: Detected lcore 0 as core 0 on socket 0 00:03:22.150 EAL: Detected lcore 1 as core 1 on socket 0 00:03:22.150 EAL: Detected lcore 2 as core 2 on socket 0 00:03:22.150 EAL: Detected lcore 3 as core 3 on socket 0 00:03:22.150 EAL: Detected lcore 4 as core 4 on socket 0 00:03:22.150 EAL: Detected lcore 5 as core 5 on socket 0 00:03:22.150 EAL: Detected lcore 6 as core 8 on socket 0 00:03:22.150 EAL: Detected lcore 7 as core 9 on socket 0 00:03:22.150 EAL: Detected lcore 8 as core 10 on socket 0 00:03:22.150 EAL: Detected lcore 9 as core 11 on socket 0 00:03:22.150 EAL: Detected lcore 10 as core 12 on socket 0 00:03:22.150 EAL: Detected lcore 11 as core 13 on socket 0 00:03:22.150 EAL: Detected lcore 12 as core 0 on socket 1 00:03:22.150 EAL: Detected lcore 13 as core 1 on socket 1 00:03:22.150 EAL: Detected lcore 14 as core 2 on socket 1 00:03:22.150 EAL: Detected lcore 15 as core 3 on socket 1 00:03:22.150 EAL: Detected lcore 16 as core 4 on socket 1 00:03:22.150 EAL: Detected lcore 17 as core 5 on socket 1 00:03:22.150 EAL: Detected lcore 18 as core 8 on socket 1 00:03:22.150 EAL: Detected lcore 19 as core 9 on socket 1 00:03:22.150 EAL: Detected lcore 20 as core 10 on socket 1 00:03:22.150 EAL: Detected lcore 21 as core 11 on socket 1 00:03:22.150 EAL: Detected lcore 22 as core 12 on socket 1 00:03:22.150 EAL: Detected lcore 23 as core 13 on socket 1 00:03:22.150 EAL: Detected lcore 24 as core 0 on socket 0 00:03:22.150 EAL: Detected lcore 25 as core 1 on socket 0 00:03:22.150 EAL: Detected lcore 26 as core 2 on socket 0 00:03:22.150 EAL: Detected lcore 27 as core 3 on socket 0 00:03:22.150 EAL: Detected lcore 28 as core 4 on socket 0 00:03:22.150 EAL: Detected lcore 29 as core 5 on socket 0 00:03:22.150 EAL: Detected lcore 30 as core 8 on socket 0 00:03:22.150 EAL: Detected lcore 31 as core 9 on socket 0 00:03:22.150 EAL: Detected lcore 32 as core 10 on socket 0 00:03:22.150 EAL: Detected lcore 33 as core 11 on socket 0 00:03:22.150 EAL: Detected lcore 34 as core 12 on socket 0 00:03:22.150 EAL: Detected lcore 35 as core 13 on socket 0 00:03:22.150 EAL: Detected lcore 36 as core 0 on socket 1 00:03:22.150 EAL: Detected lcore 37 as core 1 on socket 1 00:03:22.150 EAL: Detected lcore 38 as core 2 on socket 1 00:03:22.150 EAL: Detected lcore 39 as core 3 on socket 1 00:03:22.150 EAL: Detected lcore 40 as core 4 on socket 1 00:03:22.150 EAL: Detected lcore 41 as core 5 on socket 1 00:03:22.150 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:22.150 EAL: Detected lcore 43 as core 9 on socket 1 00:03:22.150 EAL: Detected lcore 44 as core 10 on socket 1 00:03:22.150 EAL: Detected lcore 45 as core 11 on socket 1 00:03:22.150 EAL: Detected lcore 46 as core 12 on socket 1 00:03:22.150 EAL: Detected lcore 47 as core 13 on socket 1 00:03:22.150 EAL: Maximum logical cores by configuration: 128 00:03:22.150 EAL: Detected CPU lcores: 48 00:03:22.150 EAL: Detected NUMA nodes: 2 00:03:22.150 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:22.150 EAL: Detected shared linkage of DPDK 00:03:22.150 EAL: No shared files mode enabled, IPC will be disabled 00:03:22.150 EAL: Bus pci wants IOVA as 'DC' 00:03:22.150 EAL: Buses did not request a specific IOVA mode. 00:03:22.150 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:22.150 EAL: Selected IOVA mode 'VA' 00:03:22.150 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.150 EAL: Probing VFIO support... 00:03:22.150 EAL: IOMMU type 1 (Type 1) is supported 00:03:22.150 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:22.150 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:22.150 EAL: VFIO support initialized 00:03:22.150 EAL: Ask a virtual area of 0x2e000 bytes 00:03:22.150 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:22.150 EAL: Setting up physically contiguous memory... 00:03:22.150 EAL: Setting maximum number of open files to 524288 00:03:22.150 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:22.150 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:22.150 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:22.150 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.150 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:22.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.150 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.150 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:22.150 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:22.150 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.150 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:22.150 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.150 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.150 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:22.150 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:22.150 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.150 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:22.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:22.151 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.151 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:22.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:22.151 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:22.151 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.151 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:22.151 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:22.151 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.151 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:22.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:22.151 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.151 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:22.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:22.151 EAL: Ask a virtual area of 0x61000 bytes 00:03:22.151 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:22.151 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:22.151 EAL: Ask a virtual area of 0x400000000 bytes 00:03:22.151 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:22.151 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:22.151 EAL: Hugepages will be freed exactly as allocated. 00:03:22.151 EAL: No shared files mode enabled, IPC is disabled 00:03:22.151 EAL: No shared files mode enabled, IPC is disabled 00:03:22.151 EAL: TSC frequency is ~2700000 KHz 00:03:22.151 EAL: Main lcore 0 is ready (tid=7fe3542c8a00;cpuset=[0]) 00:03:22.151 EAL: Trying to obtain current memory policy. 00:03:22.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.151 EAL: Restoring previous memory policy: 0 00:03:22.151 EAL: request: mp_malloc_sync 00:03:22.151 EAL: No shared files mode enabled, IPC is disabled 00:03:22.151 EAL: Heap on socket 0 was expanded by 2MB 00:03:22.151 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:22.410 EAL: Mem event callback 'spdk:(nil)' registered 00:03:22.410 00:03:22.410 00:03:22.410 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.410 http://cunit.sourceforge.net/ 00:03:22.410 00:03:22.410 00:03:22.410 Suite: components_suite 00:03:22.410 Test: vtophys_malloc_test ...passed 00:03:22.410 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 4MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 4MB 00:03:22.410 EAL: Trying to obtain current memory policy. 
00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 6MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 6MB 00:03:22.410 EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 10MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 10MB 00:03:22.410 EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 18MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 18MB 00:03:22.410 EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 34MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 34MB 00:03:22.410 EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 66MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 66MB 00:03:22.410 EAL: Trying to obtain current memory policy. 
00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.410 EAL: Restoring previous memory policy: 4 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was expanded by 130MB 00:03:22.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.410 EAL: request: mp_malloc_sync 00:03:22.410 EAL: No shared files mode enabled, IPC is disabled 00:03:22.410 EAL: Heap on socket 0 was shrunk by 130MB 00:03:22.410 EAL: Trying to obtain current memory policy. 00:03:22.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.671 EAL: Restoring previous memory policy: 4 00:03:22.671 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.671 EAL: request: mp_malloc_sync 00:03:22.671 EAL: No shared files mode enabled, IPC is disabled 00:03:22.671 EAL: Heap on socket 0 was expanded by 258MB 00:03:22.671 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.671 EAL: request: mp_malloc_sync 00:03:22.671 EAL: No shared files mode enabled, IPC is disabled 00:03:22.671 EAL: Heap on socket 0 was shrunk by 258MB 00:03:22.671 EAL: Trying to obtain current memory policy. 00:03:22.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.930 EAL: Restoring previous memory policy: 4 00:03:22.930 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.930 EAL: request: mp_malloc_sync 00:03:22.930 EAL: No shared files mode enabled, IPC is disabled 00:03:22.930 EAL: Heap on socket 0 was expanded by 514MB 00:03:22.931 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.931 EAL: request: mp_malloc_sync 00:03:22.931 EAL: No shared files mode enabled, IPC is disabled 00:03:22.931 EAL: Heap on socket 0 was shrunk by 514MB 00:03:22.931 EAL: Trying to obtain current memory policy. 
00:03:22.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.188 EAL: Restoring previous memory policy: 4 00:03:23.188 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.188 EAL: request: mp_malloc_sync 00:03:23.188 EAL: No shared files mode enabled, IPC is disabled 00:03:23.188 EAL: Heap on socket 0 was expanded by 1026MB 00:03:23.447 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.705 EAL: request: mp_malloc_sync 00:03:23.705 EAL: No shared files mode enabled, IPC is disabled 00:03:23.705 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:23.705 passed 00:03:23.705 00:03:23.705 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.705 suites 1 1 n/a 0 0 00:03:23.705 tests 2 2 2 0 0 00:03:23.705 asserts 497 497 497 0 n/a 00:03:23.705 00:03:23.705 Elapsed time = 1.394 seconds 00:03:23.706 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.706 EAL: request: mp_malloc_sync 00:03:23.706 EAL: No shared files mode enabled, IPC is disabled 00:03:23.706 EAL: Heap on socket 0 was shrunk by 2MB 00:03:23.706 EAL: No shared files mode enabled, IPC is disabled 00:03:23.706 EAL: No shared files mode enabled, IPC is disabled 00:03:23.706 EAL: No shared files mode enabled, IPC is disabled 00:03:23.706 00:03:23.706 real 0m1.513s 00:03:23.706 user 0m0.870s 00:03:23.706 sys 0m0.613s 00:03:23.706 12:55:45 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.706 12:55:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:23.706 ************************************ 00:03:23.706 END TEST env_vtophys 00:03:23.706 ************************************ 00:03:23.706 12:55:45 env -- common/autotest_common.sh@1142 -- # return 0 00:03:23.706 12:55:45 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:23.706 12:55:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.706 12:55:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.706 12:55:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:23.706 ************************************ 00:03:23.706 START TEST env_pci 00:03:23.706 ************************************ 00:03:23.706 12:55:45 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:23.706 00:03:23.706 00:03:23.706 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.706 http://cunit.sourceforge.net/ 00:03:23.706 00:03:23.706 00:03:23.706 Suite: pci 00:03:23.706 Test: pci_hook ...[2024-07-15 12:55:45.373819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3703843 has claimed it 00:03:23.706 EAL: Cannot find device (10000:00:01.0) 00:03:23.706 EAL: Failed to attach device on primary process 00:03:23.706 passed 00:03:23.706 00:03:23.706 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.706 suites 1 1 n/a 0 0 00:03:23.706 tests 1 1 1 0 0 00:03:23.706 asserts 25 25 25 0 n/a 00:03:23.706 00:03:23.706 Elapsed time = 0.021 seconds 00:03:23.706 00:03:23.706 real 0m0.035s 00:03:23.706 user 0m0.005s 00:03:23.706 sys 0m0.030s 00:03:23.706 12:55:45 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.706 12:55:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:23.706 ************************************ 00:03:23.706 END TEST env_pci 00:03:23.706 ************************************ 
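Each START TEST / END TEST banner pair in this log is emitted by autotest's run_test wrapper, which times the wrapped command and propagates its exit code. A simplified sketch of that framing (an approximation for readability, not the verbatim autotest_common.sh helper):

  # Sketch of the banner framing seen throughout this log (simplified).
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"; local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }
  run_test env_memory ./test/env/memory/memory_ut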
00:03:23.965 12:55:45 env -- common/autotest_common.sh@1142 -- # return 0 00:03:23.965 12:55:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:23.965 12:55:45 env -- env/env.sh@15 -- # uname 00:03:23.965 12:55:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:23.965 12:55:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:23.965 12:55:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:23.965 12:55:45 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:23.965 12:55:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.965 12:55:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:23.965 ************************************ 00:03:23.965 START TEST env_dpdk_post_init 00:03:23.965 ************************************ 00:03:23.965 12:55:45 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:23.965 EAL: Detected CPU lcores: 48 00:03:23.965 EAL: Detected NUMA nodes: 2 00:03:23.965 EAL: Detected shared linkage of DPDK 00:03:23.965 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:23.965 EAL: Selected IOVA mode 'VA' 00:03:23.965 EAL: No free 2048 kB hugepages reported on node 1 00:03:23.965 EAL: VFIO support initialized 00:03:23.965 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:23.965 EAL: Using IOMMU type 1 (Type 1) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:23.965 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:24.225 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:24.793 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:28.085 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:28.085 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:28.344 Starting DPDK initialization... 00:03:28.344 Starting SPDK post initialization... 00:03:28.344 SPDK NVMe probe 00:03:28.344 Attaching to 0000:88:00.0 00:03:28.344 Attached to 0000:88:00.0 00:03:28.344 Cleaning up... 
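With the controller attached at 0000:88:00.0 and cleaned up again, the BDF's current owner can be read straight from sysfs, independent of the SPDK scripts (plain-Linux sketch; the path layout is standard sysfs, not SPDK-specific):

  # Sketch: report which kernel driver (nvme, vfio-pci, ...) owns the BDF.
  bdf=0000:88:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"
  else
      echo "$bdf is not bound to any driver"
  fi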
00:03:28.344 00:03:28.344 real 0m4.381s 00:03:28.344 user 0m3.266s 00:03:28.344 sys 0m0.176s 00:03:28.344 12:55:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.344 12:55:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:28.344 ************************************ 00:03:28.344 END TEST env_dpdk_post_init 00:03:28.344 ************************************ 00:03:28.344 12:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:28.344 12:55:49 env -- env/env.sh@26 -- # uname 00:03:28.344 12:55:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:28.344 12:55:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.344 12:55:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.344 12:55:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.344 12:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.344 ************************************ 00:03:28.344 START TEST env_mem_callbacks 00:03:28.344 ************************************ 00:03:28.344 12:55:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.344 EAL: Detected CPU lcores: 48 00:03:28.344 EAL: Detected NUMA nodes: 2 00:03:28.344 EAL: Detected shared linkage of DPDK 00:03:28.344 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.344 EAL: Selected IOVA mode 'VA' 00:03:28.344 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.344 EAL: VFIO support initialized 00:03:28.344 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.344 00:03:28.344 00:03:28.344 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.344 http://cunit.sourceforge.net/ 00:03:28.344 00:03:28.344 00:03:28.344 Suite: memory 00:03:28.344 Test: test ... 
00:03:28.344 register 0x200000200000 2097152 00:03:28.344 malloc 3145728 00:03:28.344 register 0x200000400000 4194304 00:03:28.344 buf 0x200000500000 len 3145728 PASSED 00:03:28.344 malloc 64 00:03:28.344 buf 0x2000004fff40 len 64 PASSED 00:03:28.344 malloc 4194304 00:03:28.344 register 0x200000800000 6291456 00:03:28.344 buf 0x200000a00000 len 4194304 PASSED 00:03:28.344 free 0x200000500000 3145728 00:03:28.344 free 0x2000004fff40 64 00:03:28.344 unregister 0x200000400000 4194304 PASSED 00:03:28.344 free 0x200000a00000 4194304 00:03:28.344 unregister 0x200000800000 6291456 PASSED 00:03:28.344 malloc 8388608 00:03:28.344 register 0x200000400000 10485760 00:03:28.344 buf 0x200000600000 len 8388608 PASSED 00:03:28.344 free 0x200000600000 8388608 00:03:28.344 unregister 0x200000400000 10485760 PASSED 00:03:28.344 passed 00:03:28.344 00:03:28.344 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.344 suites 1 1 n/a 0 0 00:03:28.344 tests 1 1 1 0 0 00:03:28.344 asserts 15 15 15 0 n/a 00:03:28.344 00:03:28.344 Elapsed time = 0.005 seconds 00:03:28.344 00:03:28.344 real 0m0.048s 00:03:28.344 user 0m0.011s 00:03:28.344 sys 0m0.036s 00:03:28.344 12:55:49 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.344 12:55:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:28.344 ************************************ 00:03:28.344 END TEST env_mem_callbacks 00:03:28.344 ************************************ 00:03:28.344 12:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:28.344 00:03:28.344 real 0m6.410s 00:03:28.344 user 0m4.420s 00:03:28.344 sys 0m1.038s 00:03:28.344 12:55:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.344 12:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.344 ************************************ 00:03:28.344 END TEST env 00:03:28.344 ************************************ 00:03:28.344 12:55:49 -- common/autotest_common.sh@1142 -- # return 0 00:03:28.344 12:55:49 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.344 12:55:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.344 12:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.344 12:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:28.344 ************************************ 00:03:28.344 START TEST rpc 00:03:28.344 ************************************ 00:03:28.344 12:55:49 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.345 * Looking for test storage... 00:03:28.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.345 12:55:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3704491 00:03:28.345 12:55:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:28.345 12:55:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.345 12:55:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3704491 00:03:28.345 12:55:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 3704491 ']' 00:03:28.345 12:55:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.345 12:55:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:28.603 12:55:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
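The app_setup_trace NOTICE lines a little further below spell out how to snapshot the tracepoints enabled by -e bdev; the command they print can be run as-is while the target is up (pid 3704491 is specific to this run, and build/bin/spdk_trace is the assumed tool path):

  # From the NOTICE below: capture a runtime snapshot of this target's tracepoints.
  ./build/bin/spdk_trace -s spdk_tgt -p 3704491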
00:03:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.603 12:55:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:28.603 12:55:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.603 [2024-07-15 12:55:50.093036] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:03:28.603 [2024-07-15 12:55:50.093119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704491 ] 00:03:28.603 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.603 [2024-07-15 12:55:50.150673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.603 [2024-07-15 12:55:50.255788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:28.603 [2024-07-15 12:55:50.255846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3704491' to capture a snapshot of events at runtime. 00:03:28.603 [2024-07-15 12:55:50.255859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:28.603 [2024-07-15 12:55:50.255869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:28.603 [2024-07-15 12:55:50.255902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3704491 for offline analysis/debug. 00:03:28.603 [2024-07-15 12:55:50.255935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.862 12:55:50 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:28.862 12:55:50 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:28.862 12:55:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.862 12:55:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.862 12:55:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:28.862 12:55:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:28.862 12:55:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.862 12:55:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.862 12:55:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.862 ************************************ 00:03:28.862 START TEST rpc_integrity 00:03:28.862 ************************************ 00:03:28.862 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:28.862 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.862 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:28.862 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.862 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:28.862 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:28.862 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.121 { 00:03:29.121 "name": "Malloc0", 00:03:29.121 "aliases": [ 00:03:29.121 "9e1bed3f-0543-46c8-8f18-eef3ad53a347" 00:03:29.121 ], 00:03:29.121 "product_name": "Malloc disk", 00:03:29.121 "block_size": 512, 00:03:29.121 "num_blocks": 16384, 00:03:29.121 "uuid": "9e1bed3f-0543-46c8-8f18-eef3ad53a347", 00:03:29.121 "assigned_rate_limits": { 00:03:29.121 "rw_ios_per_sec": 0, 00:03:29.121 "rw_mbytes_per_sec": 0, 00:03:29.121 "r_mbytes_per_sec": 0, 00:03:29.121 "w_mbytes_per_sec": 0 00:03:29.121 }, 00:03:29.121 "claimed": false, 00:03:29.121 "zoned": false, 00:03:29.121 "supported_io_types": { 00:03:29.121 "read": true, 00:03:29.121 "write": true, 00:03:29.121 "unmap": true, 00:03:29.121 "flush": true, 00:03:29.121 "reset": true, 00:03:29.121 "nvme_admin": false, 00:03:29.121 "nvme_io": false, 00:03:29.121 "nvme_io_md": false, 00:03:29.121 "write_zeroes": true, 00:03:29.121 "zcopy": true, 00:03:29.121 "get_zone_info": false, 00:03:29.121 "zone_management": false, 00:03:29.121 "zone_append": false, 00:03:29.121 "compare": false, 00:03:29.121 "compare_and_write": false, 00:03:29.121 "abort": true, 00:03:29.121 "seek_hole": false, 00:03:29.121 "seek_data": false, 00:03:29.121 "copy": true, 00:03:29.121 "nvme_iov_md": false 00:03:29.121 }, 00:03:29.121 "memory_domains": [ 00:03:29.121 { 00:03:29.121 "dma_device_id": "system", 00:03:29.121 "dma_device_type": 1 00:03:29.121 }, 00:03:29.121 { 00:03:29.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.121 "dma_device_type": 2 00:03:29.121 } 00:03:29.121 ], 00:03:29.121 "driver_specific": {} 00:03:29.121 } 00:03:29.121 ]' 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.121 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.121 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.121 [2024-07-15 12:55:50.647099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:29.121 [2024-07-15 12:55:50.647139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.122 [2024-07-15 12:55:50.647175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21d8d50 00:03:29.122 [2024-07-15 12:55:50.647189] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.122 
[2024-07-15 12:55:50.648805] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.122 [2024-07-15 12:55:50.648834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.122 Passthru0 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.122 { 00:03:29.122 "name": "Malloc0", 00:03:29.122 "aliases": [ 00:03:29.122 "9e1bed3f-0543-46c8-8f18-eef3ad53a347" 00:03:29.122 ], 00:03:29.122 "product_name": "Malloc disk", 00:03:29.122 "block_size": 512, 00:03:29.122 "num_blocks": 16384, 00:03:29.122 "uuid": "9e1bed3f-0543-46c8-8f18-eef3ad53a347", 00:03:29.122 "assigned_rate_limits": { 00:03:29.122 "rw_ios_per_sec": 0, 00:03:29.122 "rw_mbytes_per_sec": 0, 00:03:29.122 "r_mbytes_per_sec": 0, 00:03:29.122 "w_mbytes_per_sec": 0 00:03:29.122 }, 00:03:29.122 "claimed": true, 00:03:29.122 "claim_type": "exclusive_write", 00:03:29.122 "zoned": false, 00:03:29.122 "supported_io_types": { 00:03:29.122 "read": true, 00:03:29.122 "write": true, 00:03:29.122 "unmap": true, 00:03:29.122 "flush": true, 00:03:29.122 "reset": true, 00:03:29.122 "nvme_admin": false, 00:03:29.122 "nvme_io": false, 00:03:29.122 "nvme_io_md": false, 00:03:29.122 "write_zeroes": true, 00:03:29.122 "zcopy": true, 00:03:29.122 "get_zone_info": false, 00:03:29.122 "zone_management": false, 00:03:29.122 "zone_append": false, 00:03:29.122 "compare": false, 00:03:29.122 "compare_and_write": false, 00:03:29.122 "abort": true, 00:03:29.122 "seek_hole": false, 00:03:29.122 "seek_data": false, 00:03:29.122 "copy": true, 00:03:29.122 "nvme_iov_md": false 00:03:29.122 }, 00:03:29.122 "memory_domains": [ 00:03:29.122 { 00:03:29.122 "dma_device_id": "system", 00:03:29.122 "dma_device_type": 1 00:03:29.122 }, 00:03:29.122 { 00:03:29.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.122 "dma_device_type": 2 00:03:29.122 } 00:03:29.122 ], 00:03:29.122 "driver_specific": {} 00:03:29.122 }, 00:03:29.122 { 00:03:29.122 "name": "Passthru0", 00:03:29.122 "aliases": [ 00:03:29.122 "32fbd039-28dd-57d2-8605-d4221333a2da" 00:03:29.122 ], 00:03:29.122 "product_name": "passthru", 00:03:29.122 "block_size": 512, 00:03:29.122 "num_blocks": 16384, 00:03:29.122 "uuid": "32fbd039-28dd-57d2-8605-d4221333a2da", 00:03:29.122 "assigned_rate_limits": { 00:03:29.122 "rw_ios_per_sec": 0, 00:03:29.122 "rw_mbytes_per_sec": 0, 00:03:29.122 "r_mbytes_per_sec": 0, 00:03:29.122 "w_mbytes_per_sec": 0 00:03:29.122 }, 00:03:29.122 "claimed": false, 00:03:29.122 "zoned": false, 00:03:29.122 "supported_io_types": { 00:03:29.122 "read": true, 00:03:29.122 "write": true, 00:03:29.122 "unmap": true, 00:03:29.122 "flush": true, 00:03:29.122 "reset": true, 00:03:29.122 "nvme_admin": false, 00:03:29.122 "nvme_io": false, 00:03:29.122 "nvme_io_md": false, 00:03:29.122 "write_zeroes": true, 00:03:29.122 "zcopy": true, 00:03:29.122 "get_zone_info": false, 00:03:29.122 "zone_management": false, 00:03:29.122 "zone_append": false, 00:03:29.122 "compare": false, 00:03:29.122 "compare_and_write": false, 00:03:29.122 "abort": true, 00:03:29.122 "seek_hole": false, 
00:03:29.122 "seek_data": false, 00:03:29.122 "copy": true, 00:03:29.122 "nvme_iov_md": false 00:03:29.122 }, 00:03:29.122 "memory_domains": [ 00:03:29.122 { 00:03:29.122 "dma_device_id": "system", 00:03:29.122 "dma_device_type": 1 00:03:29.122 }, 00:03:29.122 { 00:03:29.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.122 "dma_device_type": 2 00:03:29.122 } 00:03:29.122 ], 00:03:29.122 "driver_specific": { 00:03:29.122 "passthru": { 00:03:29.122 "name": "Passthru0", 00:03:29.122 "base_bdev_name": "Malloc0" 00:03:29.122 } 00:03:29.122 } 00:03:29.122 } 00:03:29.122 ]' 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.122 12:55:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.122 00:03:29.122 real 0m0.226s 00:03:29.122 user 0m0.149s 00:03:29.122 sys 0m0.020s 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 ************************************ 00:03:29.122 END TEST rpc_integrity 00:03:29.122 ************************************ 00:03:29.122 12:55:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.122 12:55:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:29.122 12:55:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.122 12:55:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.122 12:55:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.122 ************************************ 00:03:29.122 START TEST rpc_plugins 00:03:29.122 ************************************ 00:03:29.122 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:29.122 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:29.122 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.122 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:29.382 { 00:03:29.382 "name": "Malloc1", 00:03:29.382 "aliases": [ 00:03:29.382 "e90feabf-e6e1-4006-a044-9b9687431ff1" 00:03:29.382 ], 00:03:29.382 "product_name": "Malloc disk", 00:03:29.382 "block_size": 4096, 00:03:29.382 "num_blocks": 256, 00:03:29.382 "uuid": "e90feabf-e6e1-4006-a044-9b9687431ff1", 00:03:29.382 "assigned_rate_limits": { 00:03:29.382 "rw_ios_per_sec": 0, 00:03:29.382 "rw_mbytes_per_sec": 0, 00:03:29.382 "r_mbytes_per_sec": 0, 00:03:29.382 "w_mbytes_per_sec": 0 00:03:29.382 }, 00:03:29.382 "claimed": false, 00:03:29.382 "zoned": false, 00:03:29.382 "supported_io_types": { 00:03:29.382 "read": true, 00:03:29.382 "write": true, 00:03:29.382 "unmap": true, 00:03:29.382 "flush": true, 00:03:29.382 "reset": true, 00:03:29.382 "nvme_admin": false, 00:03:29.382 "nvme_io": false, 00:03:29.382 "nvme_io_md": false, 00:03:29.382 "write_zeroes": true, 00:03:29.382 "zcopy": true, 00:03:29.382 "get_zone_info": false, 00:03:29.382 "zone_management": false, 00:03:29.382 "zone_append": false, 00:03:29.382 "compare": false, 00:03:29.382 "compare_and_write": false, 00:03:29.382 "abort": true, 00:03:29.382 "seek_hole": false, 00:03:29.382 "seek_data": false, 00:03:29.382 "copy": true, 00:03:29.382 "nvme_iov_md": false 00:03:29.382 }, 00:03:29.382 "memory_domains": [ 00:03:29.382 { 00:03:29.382 "dma_device_id": "system", 00:03:29.382 "dma_device_type": 1 00:03:29.382 }, 00:03:29.382 { 00:03:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.382 "dma_device_type": 2 00:03:29.382 } 00:03:29.382 ], 00:03:29.382 "driver_specific": {} 00:03:29.382 } 00:03:29.382 ]' 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:29.382 12:55:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:29.382 00:03:29.382 real 0m0.112s 00:03:29.382 user 0m0.077s 00:03:29.382 sys 0m0.008s 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.382 12:55:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 ************************************ 00:03:29.382 END TEST rpc_plugins 00:03:29.382 ************************************ 00:03:29.382 12:55:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.382 12:55:50 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:29.382 12:55:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.382 12:55:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.382 12:55:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 ************************************ 00:03:29.382 START TEST rpc_trace_cmd_test 00:03:29.382 ************************************ 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.382 12:55:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.383 12:55:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:29.383 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3704491", 00:03:29.383 "tpoint_group_mask": "0x8", 00:03:29.383 "iscsi_conn": { 00:03:29.383 "mask": "0x2", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "scsi": { 00:03:29.383 "mask": "0x4", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "bdev": { 00:03:29.383 "mask": "0x8", 00:03:29.383 "tpoint_mask": "0xffffffffffffffff" 00:03:29.383 }, 00:03:29.383 "nvmf_rdma": { 00:03:29.383 "mask": "0x10", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "nvmf_tcp": { 00:03:29.383 "mask": "0x20", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "ftl": { 00:03:29.383 "mask": "0x40", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "blobfs": { 00:03:29.383 "mask": "0x80", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "dsa": { 00:03:29.383 "mask": "0x200", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "thread": { 00:03:29.383 "mask": "0x400", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "nvme_pcie": { 00:03:29.383 "mask": "0x800", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "iaa": { 00:03:29.383 "mask": "0x1000", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "nvme_tcp": { 00:03:29.383 "mask": "0x2000", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "bdev_nvme": { 00:03:29.383 "mask": "0x4000", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 }, 00:03:29.383 "sock": { 00:03:29.383 "mask": "0x8000", 00:03:29.383 "tpoint_mask": "0x0" 00:03:29.383 } 00:03:29.383 }' 00:03:29.383 12:55:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:29.383 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:29.383 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:29.383 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:29.383 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:29.642 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:29.642 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:29.642 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:29.643 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:29.643 12:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
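(The assertion just above confirms that launching spdk_tgt with '-e bdev' left the bdev tracepoint group fully enabled, i.e. its tpoint_mask is 0xffffffffffffffff while every other group stays 0x0. A minimal standalone check of the same behaviour, using only RPC methods that appear in this log — the socket path is the rpc.py default and may differ on other setups:

    # query the trace state of a running target and extract the bdev group mask
    scripts/rpc.py -s /var/tmp/spdk.sock trace_get_info | jq -r .bdev.tpoint_mask
    # expected output when the target was started with '-e bdev': 0xffffffffffffffff
)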
00:03:29.643 00:03:29.643 real 0m0.193s 00:03:29.643 user 0m0.170s 00:03:29.643 sys 0m0.015s 00:03:29.643 12:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 ************************************ 00:03:29.643 END TEST rpc_trace_cmd_test 00:03:29.643 ************************************ 00:03:29.643 12:55:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.643 12:55:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:29.643 12:55:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:29.643 12:55:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:29.643 12:55:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.643 12:55:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.643 12:55:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 ************************************ 00:03:29.643 START TEST rpc_daemon_integrity 00:03:29.643 ************************************ 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.643 { 00:03:29.643 "name": "Malloc2", 00:03:29.643 "aliases": [ 00:03:29.643 "d770c7f7-de1e-4b52-a1fb-9956ad38ba3e" 00:03:29.643 ], 00:03:29.643 "product_name": "Malloc disk", 00:03:29.643 "block_size": 512, 00:03:29.643 "num_blocks": 16384, 00:03:29.643 "uuid": "d770c7f7-de1e-4b52-a1fb-9956ad38ba3e", 00:03:29.643 "assigned_rate_limits": { 00:03:29.643 "rw_ios_per_sec": 0, 00:03:29.643 "rw_mbytes_per_sec": 0, 00:03:29.643 "r_mbytes_per_sec": 0, 00:03:29.643 "w_mbytes_per_sec": 0 00:03:29.643 }, 00:03:29.643 "claimed": false, 00:03:29.643 "zoned": false, 00:03:29.643 "supported_io_types": { 00:03:29.643 "read": true, 00:03:29.643 "write": true, 00:03:29.643 "unmap": true, 00:03:29.643 "flush": true, 00:03:29.643 "reset": true, 00:03:29.643 "nvme_admin": false, 00:03:29.643 "nvme_io": false, 
00:03:29.643 "nvme_io_md": false, 00:03:29.643 "write_zeroes": true, 00:03:29.643 "zcopy": true, 00:03:29.643 "get_zone_info": false, 00:03:29.643 "zone_management": false, 00:03:29.643 "zone_append": false, 00:03:29.643 "compare": false, 00:03:29.643 "compare_and_write": false, 00:03:29.643 "abort": true, 00:03:29.643 "seek_hole": false, 00:03:29.643 "seek_data": false, 00:03:29.643 "copy": true, 00:03:29.643 "nvme_iov_md": false 00:03:29.643 }, 00:03:29.643 "memory_domains": [ 00:03:29.643 { 00:03:29.643 "dma_device_id": "system", 00:03:29.643 "dma_device_type": 1 00:03:29.643 }, 00:03:29.643 { 00:03:29.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.643 "dma_device_type": 2 00:03:29.643 } 00:03:29.643 ], 00:03:29.643 "driver_specific": {} 00:03:29.643 } 00:03:29.643 ]' 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 [2024-07-15 12:55:51.320983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:29.643 [2024-07-15 12:55:51.321021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.643 [2024-07-15 12:55:51.321042] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21d9c00 00:03:29.643 [2024-07-15 12:55:51.321056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.643 [2024-07-15 12:55:51.322403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.643 [2024-07-15 12:55:51.322432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.643 Passthru0 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.643 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.643 { 00:03:29.643 "name": "Malloc2", 00:03:29.643 "aliases": [ 00:03:29.643 "d770c7f7-de1e-4b52-a1fb-9956ad38ba3e" 00:03:29.643 ], 00:03:29.643 "product_name": "Malloc disk", 00:03:29.643 "block_size": 512, 00:03:29.643 "num_blocks": 16384, 00:03:29.643 "uuid": "d770c7f7-de1e-4b52-a1fb-9956ad38ba3e", 00:03:29.643 "assigned_rate_limits": { 00:03:29.643 "rw_ios_per_sec": 0, 00:03:29.643 "rw_mbytes_per_sec": 0, 00:03:29.643 "r_mbytes_per_sec": 0, 00:03:29.643 "w_mbytes_per_sec": 0 00:03:29.643 }, 00:03:29.643 "claimed": true, 00:03:29.643 "claim_type": "exclusive_write", 00:03:29.643 "zoned": false, 00:03:29.643 "supported_io_types": { 00:03:29.643 "read": true, 00:03:29.643 "write": true, 00:03:29.643 "unmap": true, 00:03:29.643 "flush": true, 00:03:29.643 "reset": true, 00:03:29.643 "nvme_admin": false, 00:03:29.643 "nvme_io": false, 00:03:29.643 "nvme_io_md": false, 00:03:29.643 "write_zeroes": true, 00:03:29.643 "zcopy": true, 00:03:29.643 "get_zone_info": 
false, 00:03:29.643 "zone_management": false, 00:03:29.643 "zone_append": false, 00:03:29.643 "compare": false, 00:03:29.643 "compare_and_write": false, 00:03:29.643 "abort": true, 00:03:29.643 "seek_hole": false, 00:03:29.643 "seek_data": false, 00:03:29.643 "copy": true, 00:03:29.643 "nvme_iov_md": false 00:03:29.643 }, 00:03:29.643 "memory_domains": [ 00:03:29.643 { 00:03:29.643 "dma_device_id": "system", 00:03:29.643 "dma_device_type": 1 00:03:29.643 }, 00:03:29.643 { 00:03:29.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.643 "dma_device_type": 2 00:03:29.643 } 00:03:29.643 ], 00:03:29.643 "driver_specific": {} 00:03:29.643 }, 00:03:29.643 { 00:03:29.643 "name": "Passthru0", 00:03:29.643 "aliases": [ 00:03:29.643 "fc353bf6-746a-5be5-8360-3e5d9f8ff29b" 00:03:29.643 ], 00:03:29.643 "product_name": "passthru", 00:03:29.643 "block_size": 512, 00:03:29.643 "num_blocks": 16384, 00:03:29.643 "uuid": "fc353bf6-746a-5be5-8360-3e5d9f8ff29b", 00:03:29.643 "assigned_rate_limits": { 00:03:29.643 "rw_ios_per_sec": 0, 00:03:29.643 "rw_mbytes_per_sec": 0, 00:03:29.643 "r_mbytes_per_sec": 0, 00:03:29.643 "w_mbytes_per_sec": 0 00:03:29.643 }, 00:03:29.643 "claimed": false, 00:03:29.643 "zoned": false, 00:03:29.643 "supported_io_types": { 00:03:29.643 "read": true, 00:03:29.643 "write": true, 00:03:29.643 "unmap": true, 00:03:29.643 "flush": true, 00:03:29.643 "reset": true, 00:03:29.643 "nvme_admin": false, 00:03:29.643 "nvme_io": false, 00:03:29.643 "nvme_io_md": false, 00:03:29.643 "write_zeroes": true, 00:03:29.643 "zcopy": true, 00:03:29.643 "get_zone_info": false, 00:03:29.643 "zone_management": false, 00:03:29.643 "zone_append": false, 00:03:29.643 "compare": false, 00:03:29.643 "compare_and_write": false, 00:03:29.643 "abort": true, 00:03:29.643 "seek_hole": false, 00:03:29.643 "seek_data": false, 00:03:29.643 "copy": true, 00:03:29.643 "nvme_iov_md": false 00:03:29.643 }, 00:03:29.643 "memory_domains": [ 00:03:29.643 { 00:03:29.643 "dma_device_id": "system", 00:03:29.643 "dma_device_type": 1 00:03:29.643 }, 00:03:29.643 { 00:03:29.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.643 "dma_device_type": 2 00:03:29.643 } 00:03:29.643 ], 00:03:29.643 "driver_specific": { 00:03:29.643 "passthru": { 00:03:29.643 "name": "Passthru0", 00:03:29.643 "base_bdev_name": "Malloc2" 00:03:29.643 } 00:03:29.643 } 00:03:29.643 } 00:03:29.643 ]' 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:29.902 12:55:51 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.902 00:03:29.902 real 0m0.230s 00:03:29.902 user 0m0.158s 00:03:29.902 sys 0m0.019s 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.902 12:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.902 ************************************ 00:03:29.902 END TEST rpc_daemon_integrity 00:03:29.902 ************************************ 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:29.902 12:55:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:29.902 12:55:51 rpc -- rpc/rpc.sh@84 -- # killprocess 3704491 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@948 -- # '[' -z 3704491 ']' 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@952 -- # kill -0 3704491 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@953 -- # uname 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3704491 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3704491' 00:03:29.902 killing process with pid 3704491 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@967 -- # kill 3704491 00:03:29.902 12:55:51 rpc -- common/autotest_common.sh@972 -- # wait 3704491 00:03:30.469 00:03:30.470 real 0m1.946s 00:03:30.470 user 0m2.408s 00:03:30.470 sys 0m0.596s 00:03:30.470 12:55:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.470 12:55:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.470 ************************************ 00:03:30.470 END TEST rpc 00:03:30.470 ************************************ 00:03:30.470 12:55:51 -- common/autotest_common.sh@1142 -- # return 0 00:03:30.470 12:55:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.470 12:55:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.470 12:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.470 12:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:30.470 ************************************ 00:03:30.470 START TEST skip_rpc 00:03:30.470 ************************************ 00:03:30.470 12:55:51 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.470 * Looking for test storage... 
00:03:30.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:30.470 12:55:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:30.470 12:55:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.470 12:55:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:30.470 12:55:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.470 12:55:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.470 12:55:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.470 ************************************ 00:03:30.470 START TEST skip_rpc 00:03:30.470 ************************************ 00:03:30.470 12:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:30.470 12:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3704930 00:03:30.470 12:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:30.470 12:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.470 12:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:30.470 [2024-07-15 12:55:52.112928] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:03:30.470 [2024-07-15 12:55:52.113010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704930 ] 00:03:30.470 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.730 [2024-07-15 12:55:52.187739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.730 [2024-07-15 12:55:52.319771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.041 12:55:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3704930 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3704930 ']' 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3704930 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3704930 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3704930' 00:03:36.042 killing process with pid 3704930 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3704930 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3704930 00:03:36.042 00:03:36.042 real 0m5.491s 00:03:36.042 user 0m5.167s 00:03:36.042 sys 0m0.322s 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.042 12:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.042 ************************************ 00:03:36.042 END TEST skip_rpc 00:03:36.042 ************************************ 00:03:36.042 12:55:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:36.042 12:55:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:36.042 12:55:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.042 12:55:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.042 12:55:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.042 ************************************ 00:03:36.042 START TEST skip_rpc_with_json 00:03:36.042 ************************************ 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3705625 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3705625 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3705625 ']' 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
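(What follows exercises the JSON save/restore path: the test first queries the TCP transport before one exists, expecting a JSON-RPC "No such device" error, then creates the transport and dumps the full runtime state with save_config. A condensed sketch of that sequence against a running target — method names and flags are taken from this log, paths are illustrative:

    # 1. querying a transport that does not exist yet should fail
    scripts/rpc.py nvmf_get_transports --trtype tcp    # -> "transport 'tcp' does not exist"
    # 2. create the transport, then persist the complete runtime configuration
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > config.json
)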
00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:36.042 12:55:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.042 [2024-07-15 12:55:57.650462] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:03:36.042 [2024-07-15 12:55:57.650549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705625 ] 00:03:36.042 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.042 [2024-07-15 12:55:57.707146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.316 [2024-07-15 12:55:57.816586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.576 [2024-07-15 12:55:58.080986] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:36.576 request: 00:03:36.576 { 00:03:36.576 "trtype": "tcp", 00:03:36.576 "method": "nvmf_get_transports", 00:03:36.576 "req_id": 1 00:03:36.576 } 00:03:36.576 Got JSON-RPC error response 00:03:36.576 response: 00:03:36.576 { 00:03:36.576 "code": -19, 00:03:36.576 "message": "No such device" 00:03:36.576 } 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.576 [2024-07-15 12:55:58.089102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:36.576 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.576 { 00:03:36.576 "subsystems": [ 00:03:36.576 { 00:03:36.576 "subsystem": "vfio_user_target", 00:03:36.576 "config": null 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "keyring", 00:03:36.576 "config": [] 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "iobuf", 00:03:36.576 "config": [ 00:03:36.576 { 00:03:36.576 "method": "iobuf_set_options", 00:03:36.576 "params": { 00:03:36.576 "small_pool_count": 8192, 00:03:36.576 "large_pool_count": 1024, 00:03:36.576 "small_bufsize": 8192, 00:03:36.576 "large_bufsize": 
135168 00:03:36.576 } 00:03:36.576 } 00:03:36.576 ] 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "sock", 00:03:36.576 "config": [ 00:03:36.576 { 00:03:36.576 "method": "sock_set_default_impl", 00:03:36.576 "params": { 00:03:36.576 "impl_name": "posix" 00:03:36.576 } 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "method": "sock_impl_set_options", 00:03:36.576 "params": { 00:03:36.576 "impl_name": "ssl", 00:03:36.576 "recv_buf_size": 4096, 00:03:36.576 "send_buf_size": 4096, 00:03:36.576 "enable_recv_pipe": true, 00:03:36.576 "enable_quickack": false, 00:03:36.576 "enable_placement_id": 0, 00:03:36.576 "enable_zerocopy_send_server": true, 00:03:36.576 "enable_zerocopy_send_client": false, 00:03:36.576 "zerocopy_threshold": 0, 00:03:36.576 "tls_version": 0, 00:03:36.576 "enable_ktls": false 00:03:36.576 } 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "method": "sock_impl_set_options", 00:03:36.576 "params": { 00:03:36.576 "impl_name": "posix", 00:03:36.576 "recv_buf_size": 2097152, 00:03:36.576 "send_buf_size": 2097152, 00:03:36.576 "enable_recv_pipe": true, 00:03:36.576 "enable_quickack": false, 00:03:36.576 "enable_placement_id": 0, 00:03:36.576 "enable_zerocopy_send_server": true, 00:03:36.576 "enable_zerocopy_send_client": false, 00:03:36.576 "zerocopy_threshold": 0, 00:03:36.576 "tls_version": 0, 00:03:36.576 "enable_ktls": false 00:03:36.576 } 00:03:36.576 } 00:03:36.576 ] 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "vmd", 00:03:36.576 "config": [] 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "accel", 00:03:36.576 "config": [ 00:03:36.576 { 00:03:36.576 "method": "accel_set_options", 00:03:36.576 "params": { 00:03:36.576 "small_cache_size": 128, 00:03:36.576 "large_cache_size": 16, 00:03:36.576 "task_count": 2048, 00:03:36.576 "sequence_count": 2048, 00:03:36.576 "buf_count": 2048 00:03:36.576 } 00:03:36.576 } 00:03:36.576 ] 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "subsystem": "bdev", 00:03:36.576 "config": [ 00:03:36.576 { 00:03:36.576 "method": "bdev_set_options", 00:03:36.576 "params": { 00:03:36.576 "bdev_io_pool_size": 65535, 00:03:36.576 "bdev_io_cache_size": 256, 00:03:36.576 "bdev_auto_examine": true, 00:03:36.576 "iobuf_small_cache_size": 128, 00:03:36.576 "iobuf_large_cache_size": 16 00:03:36.576 } 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "method": "bdev_raid_set_options", 00:03:36.576 "params": { 00:03:36.576 "process_window_size_kb": 1024 00:03:36.576 } 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "method": "bdev_iscsi_set_options", 00:03:36.576 "params": { 00:03:36.576 "timeout_sec": 30 00:03:36.576 } 00:03:36.576 }, 00:03:36.576 { 00:03:36.576 "method": "bdev_nvme_set_options", 00:03:36.576 "params": { 00:03:36.577 "action_on_timeout": "none", 00:03:36.577 "timeout_us": 0, 00:03:36.577 "timeout_admin_us": 0, 00:03:36.577 "keep_alive_timeout_ms": 10000, 00:03:36.577 "arbitration_burst": 0, 00:03:36.577 "low_priority_weight": 0, 00:03:36.577 "medium_priority_weight": 0, 00:03:36.577 "high_priority_weight": 0, 00:03:36.577 "nvme_adminq_poll_period_us": 10000, 00:03:36.577 "nvme_ioq_poll_period_us": 0, 00:03:36.577 "io_queue_requests": 0, 00:03:36.577 "delay_cmd_submit": true, 00:03:36.577 "transport_retry_count": 4, 00:03:36.577 "bdev_retry_count": 3, 00:03:36.577 "transport_ack_timeout": 0, 00:03:36.577 "ctrlr_loss_timeout_sec": 0, 00:03:36.577 "reconnect_delay_sec": 0, 00:03:36.577 "fast_io_fail_timeout_sec": 0, 00:03:36.577 "disable_auto_failback": false, 00:03:36.577 "generate_uuids": false, 00:03:36.577 "transport_tos": 0, 
00:03:36.577 "nvme_error_stat": false, 00:03:36.577 "rdma_srq_size": 0, 00:03:36.577 "io_path_stat": false, 00:03:36.577 "allow_accel_sequence": false, 00:03:36.577 "rdma_max_cq_size": 0, 00:03:36.577 "rdma_cm_event_timeout_ms": 0, 00:03:36.577 "dhchap_digests": [ 00:03:36.577 "sha256", 00:03:36.577 "sha384", 00:03:36.577 "sha512" 00:03:36.577 ], 00:03:36.577 "dhchap_dhgroups": [ 00:03:36.577 "null", 00:03:36.577 "ffdhe2048", 00:03:36.577 "ffdhe3072", 00:03:36.577 "ffdhe4096", 00:03:36.577 "ffdhe6144", 00:03:36.577 "ffdhe8192" 00:03:36.577 ] 00:03:36.577 } 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "method": "bdev_nvme_set_hotplug", 00:03:36.577 "params": { 00:03:36.577 "period_us": 100000, 00:03:36.577 "enable": false 00:03:36.577 } 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "method": "bdev_wait_for_examine" 00:03:36.577 } 00:03:36.577 ] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "scsi", 00:03:36.577 "config": null 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "scheduler", 00:03:36.577 "config": [ 00:03:36.577 { 00:03:36.577 "method": "framework_set_scheduler", 00:03:36.577 "params": { 00:03:36.577 "name": "static" 00:03:36.577 } 00:03:36.577 } 00:03:36.577 ] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "vhost_scsi", 00:03:36.577 "config": [] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "vhost_blk", 00:03:36.577 "config": [] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "ublk", 00:03:36.577 "config": [] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "nbd", 00:03:36.577 "config": [] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "nvmf", 00:03:36.577 "config": [ 00:03:36.577 { 00:03:36.577 "method": "nvmf_set_config", 00:03:36.577 "params": { 00:03:36.577 "discovery_filter": "match_any", 00:03:36.577 "admin_cmd_passthru": { 00:03:36.577 "identify_ctrlr": false 00:03:36.577 } 00:03:36.577 } 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "method": "nvmf_set_max_subsystems", 00:03:36.577 "params": { 00:03:36.577 "max_subsystems": 1024 00:03:36.577 } 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "method": "nvmf_set_crdt", 00:03:36.577 "params": { 00:03:36.577 "crdt1": 0, 00:03:36.577 "crdt2": 0, 00:03:36.577 "crdt3": 0 00:03:36.577 } 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "method": "nvmf_create_transport", 00:03:36.577 "params": { 00:03:36.577 "trtype": "TCP", 00:03:36.577 "max_queue_depth": 128, 00:03:36.577 "max_io_qpairs_per_ctrlr": 127, 00:03:36.577 "in_capsule_data_size": 4096, 00:03:36.577 "max_io_size": 131072, 00:03:36.577 "io_unit_size": 131072, 00:03:36.577 "max_aq_depth": 128, 00:03:36.577 "num_shared_buffers": 511, 00:03:36.577 "buf_cache_size": 4294967295, 00:03:36.577 "dif_insert_or_strip": false, 00:03:36.577 "zcopy": false, 00:03:36.577 "c2h_success": true, 00:03:36.577 "sock_priority": 0, 00:03:36.577 "abort_timeout_sec": 1, 00:03:36.577 "ack_timeout": 0, 00:03:36.577 "data_wr_pool_size": 0 00:03:36.577 } 00:03:36.577 } 00:03:36.577 ] 00:03:36.577 }, 00:03:36.577 { 00:03:36.577 "subsystem": "iscsi", 00:03:36.577 "config": [ 00:03:36.577 { 00:03:36.577 "method": "iscsi_set_options", 00:03:36.577 "params": { 00:03:36.577 "node_base": "iqn.2016-06.io.spdk", 00:03:36.577 "max_sessions": 128, 00:03:36.577 "max_connections_per_session": 2, 00:03:36.577 "max_queue_depth": 64, 00:03:36.577 "default_time2wait": 2, 00:03:36.577 "default_time2retain": 20, 00:03:36.577 "first_burst_length": 8192, 00:03:36.577 "immediate_data": true, 00:03:36.577 "allow_duplicated_isid": false, 00:03:36.577 
"error_recovery_level": 0, 00:03:36.577 "nop_timeout": 60, 00:03:36.577 "nop_in_interval": 30, 00:03:36.577 "disable_chap": false, 00:03:36.577 "require_chap": false, 00:03:36.577 "mutual_chap": false, 00:03:36.577 "chap_group": 0, 00:03:36.577 "max_large_datain_per_connection": 64, 00:03:36.577 "max_r2t_per_connection": 4, 00:03:36.577 "pdu_pool_size": 36864, 00:03:36.577 "immediate_data_pool_size": 16384, 00:03:36.577 "data_out_pool_size": 2048 00:03:36.577 } 00:03:36.577 } 00:03:36.577 ] 00:03:36.577 } 00:03:36.577 ] 00:03:36.577 } 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3705625 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3705625 ']' 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3705625 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705625 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705625' 00:03:36.577 killing process with pid 3705625 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3705625 00:03:36.577 12:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3705625 00:03:37.147 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3705765 00:03:37.147 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.147 12:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3705765 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3705765 ']' 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3705765 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705765 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705765' 00:03:42.426 killing process with pid 3705765 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3705765 00:03:42.426 12:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3705765 
00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.685 00:03:42.685 real 0m6.630s 00:03:42.685 user 0m6.208s 00:03:42.685 sys 0m0.693s 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.685 ************************************ 00:03:42.685 END TEST skip_rpc_with_json 00:03:42.685 ************************************ 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.685 12:56:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.685 ************************************ 00:03:42.685 START TEST skip_rpc_with_delay 00:03:42.685 ************************************ 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:42.685 [2024-07-15 12:56:04.332896] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:42.685 [2024-07-15 12:56:04.333023] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:42.685 00:03:42.685 real 0m0.070s 00:03:42.685 user 0m0.045s 00:03:42.685 sys 0m0.025s 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.685 12:56:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:42.685 ************************************ 00:03:42.685 END TEST skip_rpc_with_delay 00:03:42.685 ************************************ 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.685 12:56:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:42.685 12:56:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:42.685 12:56:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.685 12:56:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.942 ************************************ 00:03:42.942 START TEST exit_on_failed_rpc_init 00:03:42.942 ************************************ 00:03:42.942 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:42.942 12:56:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3706483 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3706483 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3706483 ']' 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:42.943 12:56:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 [2024-07-15 12:56:04.453575] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:03:42.943 [2024-07-15 12:56:04.453674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706483 ] 00:03:42.943 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.943 [2024-07-15 12:56:04.516033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.943 [2024-07-15 12:56:04.631609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.880 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.880 [2024-07-15 12:56:05.435830] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
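waitforlisten, used above to gate the test on target startup, boils down to polling the RPC Unix socket until the daemon answers. A hedged sketch (the function name, retry budget, and rpc.py invocation are assumptions for illustration; the real helper lives in autotest_common.sh):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            # bail out early if the target died during init
            kill -0 "$pid" 2>/dev/null || return 1
            # the socket answering an RPC means the app finished starting
            if [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }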
00:03:43.880 [2024-07-15 12:56:05.435931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706524 ] 00:03:43.880 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.880 [2024-07-15 12:56:05.499481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.138 [2024-07-15 12:56:05.619116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.138 [2024-07-15 12:56:05.619250] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:44.138 [2024-07-15 12:56:05.619278] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:44.138 [2024-07-15 12:56:05.619292] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3706483 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3706483 ']' 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3706483 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3706483 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3706483' 00:03:44.138 killing process with pid 3706483 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3706483 00:03:44.138 12:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3706483 00:03:44.707 00:03:44.707 real 0m1.837s 00:03:44.707 user 0m2.202s 00:03:44.707 sys 0m0.485s 00:03:44.707 12:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.707 12:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 ************************************ 00:03:44.707 END TEST exit_on_failed_rpc_init 00:03:44.707 ************************************ 00:03:44.707 12:56:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:44.707 12:56:06 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.707 00:03:44.707 real 0m14.278s 00:03:44.707 user 0m13.717s 00:03:44.707 sys 0m1.696s 00:03:44.707 12:56:06 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.707 12:56:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 ************************************ 00:03:44.707 END TEST skip_rpc 00:03:44.707 ************************************ 00:03:44.707 12:56:06 -- common/autotest_common.sh@1142 -- # return 0 00:03:44.707 12:56:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.707 12:56:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.707 12:56:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.707 12:56:06 -- common/autotest_common.sh@10 -- # set +x 00:03:44.707 ************************************ 00:03:44.707 START TEST rpc_client 00:03:44.707 ************************************ 00:03:44.707 12:56:06 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.707 * Looking for test storage... 00:03:44.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:44.707 12:56:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:44.707 OK 00:03:44.708 12:56:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:44.708 00:03:44.708 real 0m0.069s 00:03:44.708 user 0m0.029s 00:03:44.708 sys 0m0.045s 00:03:44.708 12:56:06 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.708 12:56:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:44.708 ************************************ 00:03:44.708 END TEST rpc_client 00:03:44.708 ************************************ 00:03:44.708 12:56:06 -- common/autotest_common.sh@1142 -- # return 0 00:03:44.708 12:56:06 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.708 12:56:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.708 12:56:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.708 12:56:06 -- common/autotest_common.sh@10 -- # set +x 00:03:44.968 ************************************ 00:03:44.968 START TEST json_config 00:03:44.968 ************************************ 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.968 
12:56:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.968 12:56:06 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.968 12:56:06 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.968 12:56:06 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.968 12:56:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.968 12:56:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.968 12:56:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.968 12:56:06 json_config -- paths/export.sh@5 -- # export PATH 00:03:44.968 12:56:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@47 -- # : 0 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.968 12:56:06 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:44.968 12:56:06 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:44.968 INFO: JSON configuration test init 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.968 12:56:06 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:44.968 12:56:06 json_config -- json_config/common.sh@9 -- # local app=target 00:03:44.968 12:56:06 json_config -- json_config/common.sh@10 -- # shift 00:03:44.968 12:56:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.968 12:56:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.968 12:56:06 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.968 12:56:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.968 12:56:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.968 12:56:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3706766 00:03:44.968 12:56:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:44.968 12:56:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.968 Waiting for target to run... 00:03:44.968 12:56:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3706766 /var/tmp/spdk_tgt.sock 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 3706766 ']' 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:44.968 12:56:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.968 [2024-07-15 12:56:06.536199] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:03:44.968 [2024-07-15 12:56:06.536311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706766 ] 00:03:44.968 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.227 [2024-07-15 12:56:06.896426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.486 [2024-07-15 12:56:06.985254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:46.054 12:56:07 json_config -- json_config/common.sh@26 -- # echo '' 00:03:46.054 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:46.054 12:56:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:46.054 12:56:07 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:46.054 12:56:07 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:49.336 12:56:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.336 12:56:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:49.336 12:56:10 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:49.336 12:56:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:49.594 MallocForNvmf0 00:03:49.594 12:56:11 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:49.594 12:56:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:49.850 MallocForNvmf1 00:03:49.850 12:56:11 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:49.850 12:56:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:50.107 [2024-07-15 12:56:11.691381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:50.107 12:56:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:50.107 12:56:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:50.365 12:56:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:50.365 12:56:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:50.622 12:56:12 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:50.622 12:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:50.880 12:56:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:50.880 12:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:51.137 [2024-07-15 12:56:12.670750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:51.137 12:56:12 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:51.137 12:56:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.137 12:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.137 12:56:12 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:51.137 12:56:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.137 12:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.137 12:56:12 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:51.137 12:56:12 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:51.137 12:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:51.394 MallocBdevForConfigChangeCheck 00:03:51.394 12:56:12 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:51.394 12:56:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.394 12:56:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.394 12:56:12 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:51.394 12:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.970 12:56:13 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:51.970 INFO: shutting down applications... 00:03:51.970 12:56:13 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:51.970 12:56:13 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:51.970 12:56:13 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:51.970 12:56:13 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.378 Calling clear_iscsi_subsystem 00:03:53.378 Calling clear_nvmf_subsystem 00:03:53.378 Calling clear_nbd_subsystem 00:03:53.378 Calling clear_ublk_subsystem 00:03:53.378 Calling clear_vhost_blk_subsystem 00:03:53.378 Calling clear_vhost_scsi_subsystem 00:03:53.378 Calling clear_bdev_subsystem 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:53.378 12:56:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:53.947 12:56:15 json_config -- json_config/json_config.sh@345 -- # break 00:03:53.947 12:56:15 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:53.948 12:56:15 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:53.948 12:56:15 json_config -- json_config/common.sh@31 -- # local app=target 00:03:53.948 12:56:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.948 12:56:15 json_config -- json_config/common.sh@35 -- # [[ -n 3706766 ]] 00:03:53.948 12:56:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3706766 00:03:53.948 12:56:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.948 12:56:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.948 12:56:15 json_config -- json_config/common.sh@41 -- # kill -0 3706766 00:03:53.948 12:56:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:54.250 12:56:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:54.250 12:56:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.250 12:56:15 json_config -- json_config/common.sh@41 -- # kill -0 3706766 00:03:54.250 12:56:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:54.250 12:56:15 json_config -- json_config/common.sh@43 -- # break 00:03:54.250 12:56:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:54.250 12:56:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:03:54.250 SPDK target shutdown done 00:03:54.250 12:56:15 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:54.250 INFO: relaunching applications... 00:03:54.250 12:56:15 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.250 12:56:15 json_config -- json_config/common.sh@9 -- # local app=target 00:03:54.250 12:56:15 json_config -- json_config/common.sh@10 -- # shift 00:03:54.250 12:56:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.250 12:56:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.250 12:56:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.250 12:56:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.250 12:56:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.250 12:56:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3708060 00:03:54.250 12:56:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.250 12:56:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.250 Waiting for target to run... 00:03:54.250 12:56:15 json_config -- json_config/common.sh@25 -- # waitforlisten 3708060 /var/tmp/spdk_tgt.sock 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 3708060 ']' 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:54.250 12:56:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.509 [2024-07-15 12:56:15.954941] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:03:54.509 [2024-07-15 12:56:15.955047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708060 ] 00:03:54.509 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.074 [2024-07-15 12:56:16.492512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.075 [2024-07-15 12:56:16.596129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.362 [2024-07-15 12:56:19.640556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.362 [2024-07-15 12:56:19.673044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:58.930 12:56:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.930 12:56:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:58.930 12:56:20 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.930 00:03:58.930 12:56:20 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:58.930 12:56:20 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:58.930 INFO: Checking if target configuration is the same... 00:03:58.930 12:56:20 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.930 12:56:20 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:58.930 12:56:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.930 + '[' 2 -ne 2 ']' 00:03:58.930 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.930 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:58.930 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.930 +++ basename /dev/fd/62 00:03:58.930 ++ mktemp /tmp/62.XXX 00:03:58.930 + tmp_file_1=/tmp/62.xpN 00:03:58.930 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.930 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.930 + tmp_file_2=/tmp/spdk_tgt_config.json.ako 00:03:58.930 + ret=0 00:03:58.930 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.188 + diff -u /tmp/62.xpN /tmp/spdk_tgt_config.json.ako 00:03:59.188 + echo 'INFO: JSON config files are the same' 00:03:59.188 INFO: JSON config files are the same 00:03:59.188 + rm /tmp/62.xpN /tmp/spdk_tgt_config.json.ako 00:03:59.188 + exit 0 00:03:59.188 12:56:20 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:59.188 12:56:20 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:59.188 INFO: changing configuration and checking if this can be detected... 
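The 'JSON config files are the same' verdict above comes from normalizing both sides before diffing: the live save_config dump and the on-disk spdk_tgt_config.json are each run through config_filter.py -method sort so key order cannot cause false mismatches. A condensed sketch of that check, assuming the filter reads stdin and writes stdout as the pipeline above suggests (temp-file names are illustrative):

    live=$(mktemp /tmp/62.XXX)          # live config dumped from the relaunched target
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$live"
    test/json_config/config_filter.py -method sort < "$live" > "$live.sorted"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
    if diff -u "$live.sorted" /tmp/disk.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi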
00:03:59.188 12:56:20 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:59.188 12:56:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:59.445 12:56:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.445 12:56:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:59.445 12:56:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:59.445 + '[' 2 -ne 2 ']' 00:03:59.445 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:59.445 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:59.445 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.445 +++ basename /dev/fd/62 00:03:59.445 ++ mktemp /tmp/62.XXX 00:03:59.445 + tmp_file_1=/tmp/62.TDn 00:03:59.445 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.445 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:59.445 + tmp_file_2=/tmp/spdk_tgt_config.json.AuQ 00:03:59.445 + ret=0 00:03:59.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.015 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.015 + diff -u /tmp/62.TDn /tmp/spdk_tgt_config.json.AuQ 00:04:00.015 + ret=1 00:04:00.015 + echo '=== Start of file: /tmp/62.TDn ===' 00:04:00.015 + cat /tmp/62.TDn 00:04:00.015 + echo '=== End of file: /tmp/62.TDn ===' 00:04:00.015 + echo '' 00:04:00.015 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AuQ ===' 00:04:00.015 + cat /tmp/spdk_tgt_config.json.AuQ 00:04:00.015 + echo '=== End of file: /tmp/spdk_tgt_config.json.AuQ ===' 00:04:00.015 + echo '' 00:04:00.015 + rm /tmp/62.TDn /tmp/spdk_tgt_config.json.AuQ 00:04:00.015 + exit 1 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:00.015 INFO: configuration change detected. 
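The teardown on the lines that follow goes through killprocess, which sanity-checks the pid before signalling it. A minimal sketch of that pattern as it appears in the trace (simplified; the sudo guard mirrors the ps comm= check below, and the Linux-only ps flags match the uname gate the harness applies first):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1            # anything to kill?
        # refuse to SIGTERM a sudo wrapper; only signal the reactor process itself
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"                                       # SIGTERM lets SPDK shut down cleanly
        wait "$pid" 2>/dev/null || true                   # reap it if it is a child of this shell
    }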
00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@317 -- # [[ -n 3708060 ]] 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.015 12:56:21 json_config -- json_config/json_config.sh@323 -- # killprocess 3708060 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@948 -- # '[' -z 3708060 ']' 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@952 -- # kill -0 3708060 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@953 -- # uname 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3708060 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3708060' 00:04:00.015 killing process with pid 3708060 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@967 -- # kill 3708060 00:04:00.015 12:56:21 json_config -- common/autotest_common.sh@972 -- # wait 3708060 00:04:01.922 12:56:23 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.922 12:56:23 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:01.922 12:56:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.922 12:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.922 12:56:23 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:01.922 12:56:23 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:01.922 INFO: Success 00:04:01.922 00:04:01.922 real 0m16.804s 
00:04:01.922 user 0m18.775s 00:04:01.922 sys 0m2.086s 00:04:01.922 12:56:23 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.922 12:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.922 ************************************ 00:04:01.922 END TEST json_config 00:04:01.922 ************************************ 00:04:01.922 12:56:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.922 12:56:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.922 12:56:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.922 12:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.922 12:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:01.922 ************************************ 00:04:01.922 START TEST json_config_extra_key 00:04:01.922 ************************************ 00:04:01.922 12:56:23 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:01.922 12:56:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.922 12:56:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.922 12:56:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.922 12:56:23 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.922 12:56:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.922 12:56:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.922 12:56:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:01.922 12:56:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:01.922 12:56:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:01.922 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:01.923 12:56:23 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:01.923 INFO: launching applications... 00:04:01.923 12:56:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3709021 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.923 Waiting for target to run... 00:04:01.923 12:56:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3709021 /var/tmp/spdk_tgt.sock 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3709021 ']' 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.923 12:56:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.923 [2024-07-15 12:56:23.388250] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:01.923 [2024-07-15 12:56:23.388359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709021 ] 00:04:01.923 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.490 [2024-07-15 12:56:23.894833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.490 [2024-07-15 12:56:24.002362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.748 12:56:24 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.748 12:56:24 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:02.748 00:04:02.748 12:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:02.748 INFO: shutting down applications... 00:04:02.748 12:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3709021 ]] 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3709021 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3709021 00:04:02.748 12:56:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3709021 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:03.314 12:56:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:03.314 SPDK target shutdown done 00:04:03.314 12:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:03.314 Success 00:04:03.314 00:04:03.314 real 0m1.590s 00:04:03.314 user 0m1.471s 00:04:03.314 sys 0m0.600s 00:04:03.314 12:56:24 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.314 12:56:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:03.314 ************************************ 00:04:03.314 END TEST json_config_extra_key 00:04:03.314 ************************************ 00:04:03.314 12:56:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:03.314 12:56:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:03.314 12:56:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.314 12:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.314 12:56:24 -- 
common/autotest_common.sh@10 -- # set +x 00:04:03.314 ************************************ 00:04:03.314 START TEST alias_rpc 00:04:03.314 ************************************ 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:03.314 * Looking for test storage... 00:04:03.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:03.314 12:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:03.314 12:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3709289 00:04:03.314 12:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.314 12:56:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3709289 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3709289 ']' 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.314 12:56:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.574 [2024-07-15 12:56:25.025116] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:03.574 [2024-07-15 12:56:25.025222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709289 ] 00:04:03.574 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.574 [2024-07-15 12:56:25.081347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.574 [2024-07-15 12:56:25.186618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.832 12:56:25 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.832 12:56:25 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:03.832 12:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:04.091 12:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3709289 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3709289 ']' 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3709289 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3709289 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3709289' 00:04:04.091 killing process with pid 3709289 00:04:04.091 12:56:25 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3709289 00:04:04.091 12:56:25 alias_rpc -- common/autotest_common.sh@972 -- # wait 3709289 00:04:04.658 00:04:04.658 real 0m1.293s 00:04:04.658 user 0m1.367s 00:04:04.658 sys 0m0.424s 00:04:04.658 12:56:26 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.658 12:56:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.658 ************************************ 00:04:04.658 END TEST alias_rpc 00:04:04.658 ************************************ 00:04:04.658 12:56:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.658 12:56:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:04.658 12:56:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:04.658 12:56:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.658 12:56:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.658 12:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:04.658 ************************************ 00:04:04.658 START TEST spdkcli_tcp 00:04:04.658 ************************************ 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:04.658 * Looking for test storage... 00:04:04.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3709475 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:04.658 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3709475 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3709475 ']' 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.658 12:56:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.917 [2024-07-15 12:56:26.371840] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:04.917 [2024-07-15 12:56:26.371940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709475 ] 00:04:04.917 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.917 [2024-07-15 12:56:26.428413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.917 [2024-07-15 12:56:26.535290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.917 [2024-07-15 12:56:26.535294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.176 12:56:26 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.176 12:56:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:05.176 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3709600 00:04:05.176 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:05.176 12:56:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:05.435 [ 00:04:05.435 "bdev_malloc_delete", 00:04:05.435 "bdev_malloc_create", 00:04:05.435 "bdev_null_resize", 00:04:05.435 "bdev_null_delete", 00:04:05.435 "bdev_null_create", 00:04:05.435 "bdev_nvme_cuse_unregister", 00:04:05.435 "bdev_nvme_cuse_register", 00:04:05.435 "bdev_opal_new_user", 00:04:05.435 "bdev_opal_set_lock_state", 00:04:05.435 "bdev_opal_delete", 00:04:05.435 "bdev_opal_get_info", 00:04:05.435 "bdev_opal_create", 00:04:05.435 "bdev_nvme_opal_revert", 00:04:05.435 "bdev_nvme_opal_init", 00:04:05.435 "bdev_nvme_send_cmd", 00:04:05.435 "bdev_nvme_get_path_iostat", 00:04:05.435 "bdev_nvme_get_mdns_discovery_info", 00:04:05.435 "bdev_nvme_stop_mdns_discovery", 00:04:05.435 "bdev_nvme_start_mdns_discovery", 00:04:05.435 "bdev_nvme_set_multipath_policy", 00:04:05.435 "bdev_nvme_set_preferred_path", 00:04:05.435 "bdev_nvme_get_io_paths", 00:04:05.435 "bdev_nvme_remove_error_injection", 00:04:05.435 "bdev_nvme_add_error_injection", 00:04:05.435 "bdev_nvme_get_discovery_info", 00:04:05.435 "bdev_nvme_stop_discovery", 00:04:05.435 "bdev_nvme_start_discovery", 00:04:05.435 "bdev_nvme_get_controller_health_info", 00:04:05.435 "bdev_nvme_disable_controller", 00:04:05.435 "bdev_nvme_enable_controller", 00:04:05.435 "bdev_nvme_reset_controller", 00:04:05.435 "bdev_nvme_get_transport_statistics", 00:04:05.435 "bdev_nvme_apply_firmware", 00:04:05.436 "bdev_nvme_detach_controller", 00:04:05.436 "bdev_nvme_get_controllers", 00:04:05.436 "bdev_nvme_attach_controller", 00:04:05.436 "bdev_nvme_set_hotplug", 00:04:05.436 "bdev_nvme_set_options", 00:04:05.436 "bdev_passthru_delete", 00:04:05.436 "bdev_passthru_create", 00:04:05.436 "bdev_lvol_set_parent_bdev", 00:04:05.436 "bdev_lvol_set_parent", 00:04:05.436 "bdev_lvol_check_shallow_copy", 00:04:05.436 "bdev_lvol_start_shallow_copy", 00:04:05.436 "bdev_lvol_grow_lvstore", 00:04:05.436 "bdev_lvol_get_lvols", 00:04:05.436 "bdev_lvol_get_lvstores", 00:04:05.436 "bdev_lvol_delete", 00:04:05.436 "bdev_lvol_set_read_only", 00:04:05.436 "bdev_lvol_resize", 00:04:05.436 "bdev_lvol_decouple_parent", 00:04:05.436 "bdev_lvol_inflate", 00:04:05.436 "bdev_lvol_rename", 00:04:05.436 "bdev_lvol_clone_bdev", 00:04:05.436 "bdev_lvol_clone", 00:04:05.436 "bdev_lvol_snapshot", 00:04:05.436 "bdev_lvol_create", 00:04:05.436 "bdev_lvol_delete_lvstore", 00:04:05.436 
"bdev_lvol_rename_lvstore", 00:04:05.436 "bdev_lvol_create_lvstore", 00:04:05.436 "bdev_raid_set_options", 00:04:05.436 "bdev_raid_remove_base_bdev", 00:04:05.436 "bdev_raid_add_base_bdev", 00:04:05.436 "bdev_raid_delete", 00:04:05.436 "bdev_raid_create", 00:04:05.436 "bdev_raid_get_bdevs", 00:04:05.436 "bdev_error_inject_error", 00:04:05.436 "bdev_error_delete", 00:04:05.436 "bdev_error_create", 00:04:05.436 "bdev_split_delete", 00:04:05.436 "bdev_split_create", 00:04:05.436 "bdev_delay_delete", 00:04:05.436 "bdev_delay_create", 00:04:05.436 "bdev_delay_update_latency", 00:04:05.436 "bdev_zone_block_delete", 00:04:05.436 "bdev_zone_block_create", 00:04:05.436 "blobfs_create", 00:04:05.436 "blobfs_detect", 00:04:05.436 "blobfs_set_cache_size", 00:04:05.436 "bdev_aio_delete", 00:04:05.436 "bdev_aio_rescan", 00:04:05.436 "bdev_aio_create", 00:04:05.436 "bdev_ftl_set_property", 00:04:05.436 "bdev_ftl_get_properties", 00:04:05.436 "bdev_ftl_get_stats", 00:04:05.436 "bdev_ftl_unmap", 00:04:05.436 "bdev_ftl_unload", 00:04:05.436 "bdev_ftl_delete", 00:04:05.436 "bdev_ftl_load", 00:04:05.436 "bdev_ftl_create", 00:04:05.436 "bdev_virtio_attach_controller", 00:04:05.436 "bdev_virtio_scsi_get_devices", 00:04:05.436 "bdev_virtio_detach_controller", 00:04:05.436 "bdev_virtio_blk_set_hotplug", 00:04:05.436 "bdev_iscsi_delete", 00:04:05.436 "bdev_iscsi_create", 00:04:05.436 "bdev_iscsi_set_options", 00:04:05.436 "accel_error_inject_error", 00:04:05.436 "ioat_scan_accel_module", 00:04:05.436 "dsa_scan_accel_module", 00:04:05.436 "iaa_scan_accel_module", 00:04:05.436 "vfu_virtio_create_scsi_endpoint", 00:04:05.436 "vfu_virtio_scsi_remove_target", 00:04:05.436 "vfu_virtio_scsi_add_target", 00:04:05.436 "vfu_virtio_create_blk_endpoint", 00:04:05.436 "vfu_virtio_delete_endpoint", 00:04:05.436 "keyring_file_remove_key", 00:04:05.436 "keyring_file_add_key", 00:04:05.436 "keyring_linux_set_options", 00:04:05.436 "iscsi_get_histogram", 00:04:05.436 "iscsi_enable_histogram", 00:04:05.436 "iscsi_set_options", 00:04:05.436 "iscsi_get_auth_groups", 00:04:05.436 "iscsi_auth_group_remove_secret", 00:04:05.436 "iscsi_auth_group_add_secret", 00:04:05.436 "iscsi_delete_auth_group", 00:04:05.436 "iscsi_create_auth_group", 00:04:05.436 "iscsi_set_discovery_auth", 00:04:05.436 "iscsi_get_options", 00:04:05.436 "iscsi_target_node_request_logout", 00:04:05.436 "iscsi_target_node_set_redirect", 00:04:05.436 "iscsi_target_node_set_auth", 00:04:05.436 "iscsi_target_node_add_lun", 00:04:05.436 "iscsi_get_stats", 00:04:05.436 "iscsi_get_connections", 00:04:05.436 "iscsi_portal_group_set_auth", 00:04:05.436 "iscsi_start_portal_group", 00:04:05.436 "iscsi_delete_portal_group", 00:04:05.436 "iscsi_create_portal_group", 00:04:05.436 "iscsi_get_portal_groups", 00:04:05.436 "iscsi_delete_target_node", 00:04:05.436 "iscsi_target_node_remove_pg_ig_maps", 00:04:05.436 "iscsi_target_node_add_pg_ig_maps", 00:04:05.436 "iscsi_create_target_node", 00:04:05.436 "iscsi_get_target_nodes", 00:04:05.436 "iscsi_delete_initiator_group", 00:04:05.436 "iscsi_initiator_group_remove_initiators", 00:04:05.436 "iscsi_initiator_group_add_initiators", 00:04:05.436 "iscsi_create_initiator_group", 00:04:05.436 "iscsi_get_initiator_groups", 00:04:05.436 "nvmf_set_crdt", 00:04:05.436 "nvmf_set_config", 00:04:05.436 "nvmf_set_max_subsystems", 00:04:05.436 "nvmf_stop_mdns_prr", 00:04:05.436 "nvmf_publish_mdns_prr", 00:04:05.436 "nvmf_subsystem_get_listeners", 00:04:05.436 "nvmf_subsystem_get_qpairs", 00:04:05.436 "nvmf_subsystem_get_controllers", 00:04:05.436 
"nvmf_get_stats", 00:04:05.436 "nvmf_get_transports", 00:04:05.436 "nvmf_create_transport", 00:04:05.436 "nvmf_get_targets", 00:04:05.436 "nvmf_delete_target", 00:04:05.436 "nvmf_create_target", 00:04:05.436 "nvmf_subsystem_allow_any_host", 00:04:05.436 "nvmf_subsystem_remove_host", 00:04:05.436 "nvmf_subsystem_add_host", 00:04:05.436 "nvmf_ns_remove_host", 00:04:05.436 "nvmf_ns_add_host", 00:04:05.436 "nvmf_subsystem_remove_ns", 00:04:05.436 "nvmf_subsystem_add_ns", 00:04:05.436 "nvmf_subsystem_listener_set_ana_state", 00:04:05.436 "nvmf_discovery_get_referrals", 00:04:05.436 "nvmf_discovery_remove_referral", 00:04:05.436 "nvmf_discovery_add_referral", 00:04:05.436 "nvmf_subsystem_remove_listener", 00:04:05.436 "nvmf_subsystem_add_listener", 00:04:05.436 "nvmf_delete_subsystem", 00:04:05.436 "nvmf_create_subsystem", 00:04:05.436 "nvmf_get_subsystems", 00:04:05.436 "env_dpdk_get_mem_stats", 00:04:05.436 "nbd_get_disks", 00:04:05.436 "nbd_stop_disk", 00:04:05.436 "nbd_start_disk", 00:04:05.436 "ublk_recover_disk", 00:04:05.436 "ublk_get_disks", 00:04:05.436 "ublk_stop_disk", 00:04:05.436 "ublk_start_disk", 00:04:05.436 "ublk_destroy_target", 00:04:05.436 "ublk_create_target", 00:04:05.436 "virtio_blk_create_transport", 00:04:05.436 "virtio_blk_get_transports", 00:04:05.436 "vhost_controller_set_coalescing", 00:04:05.436 "vhost_get_controllers", 00:04:05.436 "vhost_delete_controller", 00:04:05.436 "vhost_create_blk_controller", 00:04:05.436 "vhost_scsi_controller_remove_target", 00:04:05.436 "vhost_scsi_controller_add_target", 00:04:05.436 "vhost_start_scsi_controller", 00:04:05.436 "vhost_create_scsi_controller", 00:04:05.436 "thread_set_cpumask", 00:04:05.436 "framework_get_governor", 00:04:05.436 "framework_get_scheduler", 00:04:05.436 "framework_set_scheduler", 00:04:05.436 "framework_get_reactors", 00:04:05.436 "thread_get_io_channels", 00:04:05.436 "thread_get_pollers", 00:04:05.436 "thread_get_stats", 00:04:05.436 "framework_monitor_context_switch", 00:04:05.436 "spdk_kill_instance", 00:04:05.436 "log_enable_timestamps", 00:04:05.436 "log_get_flags", 00:04:05.436 "log_clear_flag", 00:04:05.436 "log_set_flag", 00:04:05.436 "log_get_level", 00:04:05.436 "log_set_level", 00:04:05.436 "log_get_print_level", 00:04:05.436 "log_set_print_level", 00:04:05.436 "framework_enable_cpumask_locks", 00:04:05.436 "framework_disable_cpumask_locks", 00:04:05.436 "framework_wait_init", 00:04:05.436 "framework_start_init", 00:04:05.436 "scsi_get_devices", 00:04:05.436 "bdev_get_histogram", 00:04:05.436 "bdev_enable_histogram", 00:04:05.436 "bdev_set_qos_limit", 00:04:05.436 "bdev_set_qd_sampling_period", 00:04:05.436 "bdev_get_bdevs", 00:04:05.436 "bdev_reset_iostat", 00:04:05.436 "bdev_get_iostat", 00:04:05.436 "bdev_examine", 00:04:05.436 "bdev_wait_for_examine", 00:04:05.436 "bdev_set_options", 00:04:05.436 "notify_get_notifications", 00:04:05.436 "notify_get_types", 00:04:05.436 "accel_get_stats", 00:04:05.436 "accel_set_options", 00:04:05.436 "accel_set_driver", 00:04:05.436 "accel_crypto_key_destroy", 00:04:05.436 "accel_crypto_keys_get", 00:04:05.436 "accel_crypto_key_create", 00:04:05.436 "accel_assign_opc", 00:04:05.436 "accel_get_module_info", 00:04:05.436 "accel_get_opc_assignments", 00:04:05.436 "vmd_rescan", 00:04:05.436 "vmd_remove_device", 00:04:05.436 "vmd_enable", 00:04:05.436 "sock_get_default_impl", 00:04:05.436 "sock_set_default_impl", 00:04:05.436 "sock_impl_set_options", 00:04:05.436 "sock_impl_get_options", 00:04:05.436 "iobuf_get_stats", 00:04:05.436 "iobuf_set_options", 
00:04:05.436 "keyring_get_keys", 00:04:05.436 "framework_get_pci_devices", 00:04:05.436 "framework_get_config", 00:04:05.436 "framework_get_subsystems", 00:04:05.436 "vfu_tgt_set_base_path", 00:04:05.436 "trace_get_info", 00:04:05.436 "trace_get_tpoint_group_mask", 00:04:05.436 "trace_disable_tpoint_group", 00:04:05.436 "trace_enable_tpoint_group", 00:04:05.436 "trace_clear_tpoint_mask", 00:04:05.436 "trace_set_tpoint_mask", 00:04:05.436 "spdk_get_version", 00:04:05.436 "rpc_get_methods" 00:04:05.436 ] 00:04:05.436 12:56:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.436 12:56:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:05.436 12:56:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3709475 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3709475 ']' 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3709475 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3709475 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3709475' 00:04:05.436 killing process with pid 3709475 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3709475 00:04:05.436 12:56:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3709475 00:04:06.019 00:04:06.019 real 0m1.308s 00:04:06.019 user 0m2.267s 00:04:06.019 sys 0m0.468s 00:04:06.019 12:56:27 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.019 12:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:06.019 ************************************ 00:04:06.019 END TEST spdkcli_tcp 00:04:06.019 ************************************ 00:04:06.019 12:56:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.019 12:56:27 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:06.019 12:56:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.019 12:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.019 12:56:27 -- common/autotest_common.sh@10 -- # set +x 00:04:06.019 ************************************ 00:04:06.019 START TEST dpdk_mem_utility 00:04:06.019 ************************************ 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:06.019 * Looking for test storage... 
00:04:06.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:06.019 12:56:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:06.019 12:56:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3709683 00:04:06.019 12:56:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.019 12:56:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3709683 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3709683 ']' 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:06.019 12:56:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.278 [2024-07-15 12:56:27.725505] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:06.278 [2024-07-15 12:56:27.725597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709683 ] 00:04:06.278 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.278 [2024-07-15 12:56:27.782788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.278 [2024-07-15 12:56:27.892270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.536 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.536 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:06.536 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:06.536 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:06.536 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.536 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.536 { 00:04:06.536 "filename": "/tmp/spdk_mem_dump.txt" 00:04:06.536 } 00:04:06.536 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.536 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:06.536 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:06.536 1 heaps totaling size 814.000000 MiB 00:04:06.536 size: 814.000000 MiB heap id: 0 00:04:06.536 end heaps---------- 00:04:06.536 8 mempools totaling size 598.116089 MiB 00:04:06.536 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:06.536 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:06.536 size: 84.521057 MiB name: bdev_io_3709683 00:04:06.536 size: 51.011292 MiB name: evtpool_3709683 00:04:06.536 
size: 50.003479 MiB name: msgpool_3709683 00:04:06.536 size: 21.763794 MiB name: PDU_Pool 00:04:06.536 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:06.536 size: 0.026123 MiB name: Session_Pool 00:04:06.536 end mempools------- 00:04:06.536 6 memzones totaling size 4.142822 MiB 00:04:06.536 size: 1.000366 MiB name: RG_ring_0_3709683 00:04:06.536 size: 1.000366 MiB name: RG_ring_1_3709683 00:04:06.536 size: 1.000366 MiB name: RG_ring_4_3709683 00:04:06.536 size: 1.000366 MiB name: RG_ring_5_3709683 00:04:06.536 size: 0.125366 MiB name: RG_ring_2_3709683 00:04:06.536 size: 0.015991 MiB name: RG_ring_3_3709683 00:04:06.536 end memzones------- 00:04:06.536 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:06.794 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:06.794 list of free elements. size: 12.519348 MiB 00:04:06.794 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:06.794 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:06.794 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:06.794 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:06.794 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:06.794 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:06.794 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:06.794 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:06.794 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:06.794 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:06.794 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:06.794 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:06.794 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:06.794 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:06.794 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:06.794 list of standard malloc elements. 
size: 199.218079 MiB 00:04:06.794 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:06.794 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:06.794 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:06.794 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:06.794 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:06.794 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:06.794 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:06.794 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:06.794 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:06.794 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:06.794 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:06.794 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:06.794 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:06.794 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:06.794 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:06.794 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:06.795 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:06.795 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:06.795 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:06.795 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:06.795 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:06.795 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:06.795 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:06.795 list of memzone associated elements. 
size: 602.262573 MiB 00:04:06.795 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:06.795 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:06.795 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:06.795 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:06.795 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:06.795 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3709683_0 00:04:06.795 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:06.795 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3709683_0 00:04:06.795 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:06.795 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3709683_0 00:04:06.795 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:06.795 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:06.795 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:06.795 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:06.795 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:06.795 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3709683 00:04:06.795 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:06.795 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3709683 00:04:06.795 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:06.795 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3709683 00:04:06.795 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:06.795 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:06.795 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:06.795 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:06.795 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:06.795 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:06.795 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:06.795 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:06.795 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:06.795 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3709683 00:04:06.795 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:06.795 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3709683 00:04:06.795 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:06.795 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3709683 00:04:06.795 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:06.795 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3709683 00:04:06.795 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:06.795 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3709683 00:04:06.795 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:06.795 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:06.795 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:06.795 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:06.795 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:06.795 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:06.795 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:06.795 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3709683 00:04:06.795 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:06.795 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:06.795 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:06.795 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:06.795 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:06.795 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3709683 00:04:06.795 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:06.795 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:06.795 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:06.795 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3709683 00:04:06.795 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:06.795 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3709683 00:04:06.795 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:06.795 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:06.795 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:06.795 12:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3709683 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3709683 ']' 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3709683 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3709683 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3709683' 00:04:06.795 killing process with pid 3709683 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3709683 00:04:06.795 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3709683 00:04:07.362 00:04:07.362 real 0m1.157s 00:04:07.362 user 0m1.127s 00:04:07.362 sys 0m0.410s 00:04:07.362 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.362 12:56:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:07.362 ************************************ 00:04:07.362 END TEST dpdk_mem_utility 00:04:07.362 ************************************ 00:04:07.362 12:56:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.362 12:56:28 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:07.362 12:56:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.362 12:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.362 12:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:07.362 ************************************ 00:04:07.362 START TEST event 00:04:07.362 ************************************ 00:04:07.362 12:56:28 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:07.362 * Looking for test storage... 
00:04:07.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:07.362 12:56:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:07.362 12:56:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:07.362 12:56:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:07.362 12:56:28 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:07.362 12:56:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.362 12:56:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.362 ************************************ 00:04:07.362 START TEST event_perf 00:04:07.362 ************************************ 00:04:07.362 12:56:28 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:07.362 Running I/O for 1 seconds...[2024-07-15 12:56:28.917240] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:07.362 [2024-07-15 12:56:28.917309] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709892 ] 00:04:07.362 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.362 [2024-07-15 12:56:28.978524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.619 [2024-07-15 12:56:29.092819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.619 [2024-07-15 12:56:29.092889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.619 [2024-07-15 12:56:29.092946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.619 [2024-07-15 12:56:29.092950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.610 Running I/O for 1 seconds... 00:04:08.610 lcore 0: 231744 00:04:08.610 lcore 1: 231744 00:04:08.610 lcore 2: 231743 00:04:08.610 lcore 3: 231743 00:04:08.610 done. 00:04:08.610 00:04:08.610 real 0m1.314s 00:04:08.610 user 0m4.218s 00:04:08.610 sys 0m0.091s 00:04:08.610 12:56:30 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.610 12:56:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:08.610 ************************************ 00:04:08.610 END TEST event_perf 00:04:08.610 ************************************ 00:04:08.610 12:56:30 event -- common/autotest_common.sh@1142 -- # return 0 00:04:08.610 12:56:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.610 12:56:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:08.610 12:56:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.610 12:56:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.610 ************************************ 00:04:08.610 START TEST event_reactor 00:04:08.610 ************************************ 00:04:08.610 12:56:30 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.610 [2024-07-15 12:56:30.277190] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:08.610 [2024-07-15 12:56:30.277255] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710149 ] 00:04:08.610 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.868 [2024-07-15 12:56:30.342604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.868 [2024-07-15 12:56:30.458816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.242 test_start 00:04:10.242 oneshot 00:04:10.242 tick 100 00:04:10.242 tick 100 00:04:10.242 tick 250 00:04:10.242 tick 100 00:04:10.242 tick 100 00:04:10.242 tick 250 00:04:10.242 tick 100 00:04:10.242 tick 500 00:04:10.242 tick 100 00:04:10.242 tick 100 00:04:10.242 tick 250 00:04:10.242 tick 100 00:04:10.242 tick 100 00:04:10.242 test_end 00:04:10.242 00:04:10.242 real 0m1.314s 00:04:10.242 user 0m1.221s 00:04:10.242 sys 0m0.087s 00:04:10.242 12:56:31 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.242 12:56:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:10.242 ************************************ 00:04:10.242 END TEST event_reactor 00:04:10.242 ************************************ 00:04:10.242 12:56:31 event -- common/autotest_common.sh@1142 -- # return 0 00:04:10.242 12:56:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.242 12:56:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:10.242 12:56:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.242 12:56:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.242 ************************************ 00:04:10.242 START TEST event_reactor_perf 00:04:10.242 ************************************ 00:04:10.242 12:56:31 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.242 [2024-07-15 12:56:31.635417] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:10.242 [2024-07-15 12:56:31.635485] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710313 ] 00:04:10.242 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.242 [2024-07-15 12:56:31.698904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.242 [2024-07-15 12:56:31.817044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.616 test_start 00:04:11.616 test_end 00:04:11.616 Performance: 357240 events per second 00:04:11.616 00:04:11.616 real 0m1.320s 00:04:11.616 user 0m1.235s 00:04:11.616 sys 0m0.080s 00:04:11.616 12:56:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.616 12:56:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 ************************************ 00:04:11.616 END TEST event_reactor_perf 00:04:11.616 ************************************ 00:04:11.616 12:56:32 event -- common/autotest_common.sh@1142 -- # return 0 00:04:11.616 12:56:32 event -- event/event.sh@49 -- # uname -s 00:04:11.616 12:56:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:11.616 12:56:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.616 12:56:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.616 12:56:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.616 12:56:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 ************************************ 00:04:11.616 START TEST event_scheduler 00:04:11.616 ************************************ 00:04:11.616 12:56:32 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.616 * Looking for test storage... 00:04:11.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3710491 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3710491 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3710491 ']' 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
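The "Waiting for process to start up and listen..." message above comes from the waitforlisten helper, which polls the target's RPC socket until it answers. A minimal stand-alone sketch of the same idea, assuming a hypothetical $SPDK_DIR pointing at an SPDK checkout (the rpc.py -s and -t flags are the same ones used elsewhere in this log):

    # Poll an SPDK app's RPC socket until it responds or we time out.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc_get_methods is cheap; any reply means the app is up and listening.
            if "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

Typical use is to start the app in the background and gate further RPCs on wait_for_rpc /var/tmp/spdk.sock || exit 1.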
00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 [2024-07-15 12:56:33.083372] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:11.616 [2024-07-15 12:56:33.083461] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710491 ] 00:04:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.616 [2024-07-15 12:56:33.142641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.616 [2024-07-15 12:56:33.251115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.616 [2024-07-15 12:56:33.251193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.616 [2024-07-15 12:56:33.251172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.616 [2024-07-15 12:56:33.251197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.616 [2024-07-15 12:56:33.292012] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:11.616 [2024-07-15 12:56:33.292041] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:11.616 [2024-07-15 12:56:33.292058] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:11.616 [2024-07-15 12:56:33.292069] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:11.616 [2024-07-15 12:56:33.292080] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.616 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.616 12:56:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 [2024-07-15 12:56:33.390251] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
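The sequence just above is the interesting part of this test: the scheduler app was started with --wait-for-rpc, so it pauses before subsystem init; framework_set_scheduler selects the dynamic scheduler (falling back to its default load/core/busy limits when the DPDK governor cannot initialize, per the *ERROR*/*NOTICE* lines); and framework_start_init resumes startup. A hedged sketch of driving that same sequence by hand against any SPDK app launched with --wait-for-rpc; all four RPC names appear in the rpc_get_methods listing earlier in this log, and $SPDK_DIR is again a hypothetical checkout path:

    RPC="$SPDK_DIR/scripts/rpc.py"
    # Select the dynamic scheduler while the app is still paused in --wait-for-rpc.
    $RPC framework_set_scheduler dynamic
    # Resume subsystem initialization; reactors begin balancing threads after this.
    $RPC framework_start_init
    # Confirm the active scheduler and where threads landed.
    $RPC framework_get_scheduler
    $RPC framework_get_reactors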
00:04:11.875 12:56:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:11.875 12:56:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.875 12:56:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 ************************************ 00:04:11.875 START TEST scheduler_create_thread 00:04:11.875 ************************************ 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 2 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 3 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 4 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 5 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 6 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 7 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 8 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 9 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 10 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.875 12:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.441 12:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.441 00:04:12.441 real 0m0.591s 00:04:12.441 user 0m0.010s 00:04:12.441 sys 0m0.003s 00:04:12.441 12:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.441 12:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.441 ************************************ 00:04:12.441 END TEST scheduler_create_thread 00:04:12.441 ************************************ 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:12.441 12:56:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:12.441 12:56:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3710491 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3710491 ']' 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3710491 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3710491 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3710491' 00:04:12.441 killing process with pid 3710491 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3710491 00:04:12.441 12:56:34 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3710491 00:04:13.007 [2024-07-15 12:56:34.486332] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
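Two shutdown idioms appear in this log: killprocess (used just above) sends a plain kill and then waits on the pid, while the json_config_extra_key shutdown at the top of this section sends SIGINT and polls with kill -0 (signal 0 checks for existence without delivering anything). A condensed sketch combining both ideas; the kill -9 fallback is an assumption of this sketch, not something the helpers above do:

    # Stop an SPDK app gracefully, then escalate if it lingers.
    stop_spdk_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 0     # process has exited
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null                     # assumed last resort after ~15s
    }

The 30 x 0.5 s bound mirrors the (( i < 30 )) / sleep 0.5 loop visible at the start of this section.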
00:04:13.265 00:04:13.265 real 0m1.750s 00:04:13.265 user 0m2.175s 00:04:13.265 sys 0m0.325s 00:04:13.265 12:56:34 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.265 12:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.265 ************************************ 00:04:13.265 END TEST event_scheduler 00:04:13.265 ************************************ 00:04:13.265 12:56:34 event -- common/autotest_common.sh@1142 -- # return 0 00:04:13.265 12:56:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.265 12:56:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.265 12:56:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.265 12:56:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.265 12:56:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.265 ************************************ 00:04:13.265 START TEST app_repeat 00:04:13.265 ************************************ 00:04:13.265 12:56:34 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3710801 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3710801' 00:04:13.265 Process app_repeat pid: 3710801 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.265 spdk_app_start Round 0 00:04:13.265 12:56:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710801 /var/tmp/spdk-nbd.sock 00:04:13.265 12:56:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3710801 ']' 00:04:13.265 12:56:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.265 12:56:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.265 12:56:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:13.266 12:56:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.266 12:56:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.266 [2024-07-15 12:56:34.812335] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:13.266 [2024-07-15 12:56:34.812392] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710801 ] 00:04:13.266 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.266 [2024-07-15 12:56:34.870151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.524 [2024-07-15 12:56:34.981599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.524 [2024-07-15 12:56:34.981603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.524 12:56:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.524 12:56:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:13.524 12:56:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.782 Malloc0 00:04:13.782 12:56:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.040 Malloc1 00:04:14.040 12:56:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.040 12:56:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:14.298 /dev/nbd0 00:04:14.298 12:56:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:14.298 12:56:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:14.298 12:56:35 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.298 1+0 records in 00:04:14.298 1+0 records out 00:04:14.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240866 s, 17.0 MB/s 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:14.298 12:56:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:14.298 12:56:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.298 12:56:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.299 12:56:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:14.556 /dev/nbd1 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.557 1+0 records in 00:04:14.557 1+0 records out 00:04:14.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177678 s, 23.1 MB/s 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:14.557 12:56:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.557 12:56:36 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.557 12:56:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:14.815 { 00:04:14.815 "nbd_device": "/dev/nbd0", 00:04:14.815 "bdev_name": "Malloc0" 00:04:14.815 }, 00:04:14.815 { 00:04:14.815 "nbd_device": "/dev/nbd1", 00:04:14.815 "bdev_name": "Malloc1" 00:04:14.815 } 00:04:14.815 ]' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:14.815 { 00:04:14.815 "nbd_device": "/dev/nbd0", 00:04:14.815 "bdev_name": "Malloc0" 00:04:14.815 }, 00:04:14.815 { 00:04:14.815 "nbd_device": "/dev/nbd1", 00:04:14.815 "bdev_name": "Malloc1" 00:04:14.815 } 00:04:14.815 ]' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:14.815 /dev/nbd1' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:14.815 /dev/nbd1' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:14.815 256+0 records in 00:04:14.815 256+0 records out 00:04:14.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506214 s, 207 MB/s 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.815 12:56:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.073 256+0 records in 00:04:15.073 256+0 records out 00:04:15.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242157 s, 43.3 MB/s 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.073 256+0 records in 00:04:15.073 256+0 records out 00:04:15.073 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.022133 s, 47.4 MB/s 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.073 12:56:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.331 12:56:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.589 12:56:37 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.589 12:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:15.847 12:56:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:15.847 12:56:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.106 12:56:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:16.364 [2024-07-15 12:56:37.936974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.364 [2024-07-15 12:56:38.056053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.364 [2024-07-15 12:56:38.056053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.623 [2024-07-15 12:56:38.116237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:16.623 [2024-07-15 12:56:38.116308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:19.151 12:56:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:19.151 12:56:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:19.151 spdk_app_start Round 1 00:04:19.151 12:56:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710801 /var/tmp/spdk-nbd.sock 00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3710801 ']' 00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
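Round 0 closes with the empty-list assertion traced just above: nbd_get_disks returns '[]' once both devices are stopped, jq extracts no names, and grep -c reports 0, which satisfies the '[' 0 -ne 0 ']' check. The counting helper, as implied by nbd_common.sh@61-66 (a sketch, not the literal source):

    nbd_get_count() {
      local rpc_server=$1
      local nbd_disks_json nbd_disks_name count
      nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
      count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep exits 1 on zero matches, hence || true
      echo "$count"
    }

With both disks attached the same helper prints 2, which is the '[' 2 -ne 2 ']' check that passes at the top of every verify pass.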
00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.151 12:56:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:19.409 12:56:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.409 12:56:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:19.409 12:56:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.667 Malloc0 00:04:19.667 12:56:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.926 Malloc1 00:04:19.926 12:56:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.926 12:56:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.184 /dev/nbd0 00:04:20.184 12:56:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.184 12:56:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:20.184 1+0 records in 00:04:20.184 1+0 records out 00:04:20.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170468 s, 24.0 MB/s 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.184 12:56:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:20.184 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.184 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.184 12:56:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.442 /dev/nbd1 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.442 1+0 records in 00:04:20.442 1+0 records out 00:04:20.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016562 s, 24.7 MB/s 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.442 12:56:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.442 12:56:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:20.700 { 00:04:20.700 "nbd_device": "/dev/nbd0", 00:04:20.700 "bdev_name": "Malloc0" 00:04:20.700 }, 00:04:20.700 { 00:04:20.700 "nbd_device": "/dev/nbd1", 00:04:20.700 "bdev_name": "Malloc1" 00:04:20.700 } 00:04:20.700 ]' 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.700 { 00:04:20.700 "nbd_device": "/dev/nbd0", 00:04:20.700 "bdev_name": "Malloc0" 00:04:20.700 }, 00:04:20.700 { 00:04:20.700 "nbd_device": "/dev/nbd1", 00:04:20.700 "bdev_name": "Malloc1" 00:04:20.700 } 00:04:20.700 ]' 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.700 /dev/nbd1' 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.700 /dev/nbd1' 00:04:20.700 12:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.701 256+0 records in 00:04:20.701 256+0 records out 00:04:20.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502711 s, 209 MB/s 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.701 256+0 records in 00:04:20.701 256+0 records out 00:04:20.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205687 s, 51.0 MB/s 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.701 256+0 records in 00:04:20.701 256+0 records out 00:04:20.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241598 s, 43.4 MB/s 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.701 12:56:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.960 12:56:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.218 12:56:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.475 12:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.475 12:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.475 12:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:21.733 12:56:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:21.733 12:56:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.990 12:56:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:22.250 [2024-07-15 12:56:43.722060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.250 [2024-07-15 12:56:43.835907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.250 [2024-07-15 12:56:43.835914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.250 [2024-07-15 12:56:43.898432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:22.250 [2024-07-15 12:56:43.898537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.810 12:56:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.810 12:56:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:24.810 spdk_app_start Round 2 00:04:24.810 12:56:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710801 /var/tmp/spdk-nbd.sock 00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3710801 ']' 00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
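Each round's write/verify pass (nbd_common.sh@70-85 in the trace) seeds 1 MiB of random data, copies it onto every NBD device with O_DIRECT, then byte-compares each device against the seed. A sketch under one assumption, that $spdk_dir points at the SPDK checkout used above:

    tmp_file=$spdk_dir/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # seed: 256 x 4 KiB = 1 MiB
    for i in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct     # write pass, bypassing the page cache
    done
    for i in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$i"                                # verify pass; nonzero exit on any mismatch
    done
    rm "$tmp_file"

Going through the exported NBD device exercises the kernel-NBD-to-SPDK-bdev data path, which is why the per-device throughput above (roughly 40-50 MB/s) sits well below the initial urandom-to-file seeding rate.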
00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.810 12:56:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:25.068 12:56:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.068 12:56:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:25.068 12:56:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.326 Malloc0 00:04:25.326 12:56:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.585 Malloc1 00:04:25.585 12:56:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.585 12:56:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.843 /dev/nbd0 00:04:25.843 12:56:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.843 12:56:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:25.843 1+0 records in 00:04:25.843 1+0 records out 00:04:25.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200402 s, 20.4 MB/s 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.843 12:56:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:25.843 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.843 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.843 12:56:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.101 /dev/nbd1 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.101 1+0 records in 00:04:26.101 1+0 records out 00:04:26.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181823 s, 22.5 MB/s 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:26.101 12:56:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.101 12:56:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.359 12:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:26.359 { 00:04:26.359 "nbd_device": "/dev/nbd0", 00:04:26.359 "bdev_name": "Malloc0" 00:04:26.359 }, 00:04:26.359 { 00:04:26.359 "nbd_device": "/dev/nbd1", 00:04:26.359 "bdev_name": "Malloc1" 00:04:26.359 } 00:04:26.359 ]' 00:04:26.359 12:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.359 { 00:04:26.359 "nbd_device": "/dev/nbd0", 00:04:26.359 "bdev_name": "Malloc0" 00:04:26.359 }, 00:04:26.359 { 00:04:26.359 "nbd_device": "/dev/nbd1", 00:04:26.359 "bdev_name": "Malloc1" 00:04:26.359 } 00:04:26.359 ]' 00:04:26.359 12:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.359 12:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.359 /dev/nbd1' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.618 /dev/nbd1' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.618 256+0 records in 00:04:26.618 256+0 records out 00:04:26.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509206 s, 206 MB/s 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.618 256+0 records in 00:04:26.618 256+0 records out 00:04:26.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245896 s, 42.6 MB/s 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.618 256+0 records in 00:04:26.618 256+0 records out 00:04:26.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234984 s, 44.6 MB/s 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.618 12:56:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.876 12:56:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.134 12:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.392 12:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.392 12:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.392 12:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.392 12:56:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.392 12:56:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.650 12:56:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.908 [2024-07-15 12:56:49.546200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.165 [2024-07-15 12:56:49.661512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.165 [2024-07-15 12:56:49.661512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.165 [2024-07-15 12:56:49.723814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.165 [2024-07-15 12:56:49.723947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.688 12:56:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3710801 /var/tmp/spdk-nbd.sock 00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3710801 ']' 00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
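Before any dd touches a device, waitfornbd (autotest_common.sh@866-887 in the trace) confirms the kernel has actually attached it: poll /proc/partitions for the name, then prove it is readable with a single direct-I/O 4 KiB read. A sketch; the poll interval and the scratch-file path are assumptions (the trace writes its probe into the test directory instead of /tmp):

    waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                    # assumed poll interval
      done
      for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0               # one real 4 KiB read means the device is live
        fi
      done
      return 1
    }

The mirror-image helper waitfornbd_exit, visible at nbd_common.sh@35-45 above, loops the same way but breaks once the name disappears from /proc/partitions.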
00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.688 12:56:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:30.944 12:56:52 event.app_repeat -- event/event.sh@39 -- # killprocess 3710801 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3710801 ']' 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3710801 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3710801 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3710801' 00:04:30.944 killing process with pid 3710801 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3710801 00:04:30.944 12:56:52 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3710801 00:04:31.201 spdk_app_start is called in Round 0. 00:04:31.201 Shutdown signal received, stop current app iteration 00:04:31.201 Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 reinitialization... 00:04:31.201 spdk_app_start is called in Round 1. 00:04:31.201 Shutdown signal received, stop current app iteration 00:04:31.201 Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 reinitialization... 00:04:31.201 spdk_app_start is called in Round 2. 00:04:31.201 Shutdown signal received, stop current app iteration 00:04:31.201 Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 reinitialization... 00:04:31.201 spdk_app_start is called in Round 3. 
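waitforlisten's externally visible behavior appears at autotest_common.sh@829-862: validate the pid, cap the retries at 100, print the "Waiting for process..." banner, then poll with tracing disabled. Because the poll loop runs under xtrace_disable, its body below is an assumption about how such a readiness check is typically written, not trace evidence:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      [ -n "$pid" ] || return 1
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1                   # target died during startup
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          return 0                                   # the socket answered an RPC; the app is up
        fi
        sleep 0.5                                    # assumed poll interval
      done
      return 1
    }

The '(( i == 0 ))' check at sh@858 suggests the real helper also distinguishes an immediate success from a delayed one; that detail is omitted here.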
00:04:31.201 Shutdown signal received, stop current app iteration 00:04:31.201 12:56:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:31.201 12:56:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:31.201 00:04:31.201 real 0m18.011s 00:04:31.201 user 0m38.893s 00:04:31.201 sys 0m3.279s 00:04:31.201 12:56:52 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.201 12:56:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.201 ************************************ 00:04:31.201 END TEST app_repeat 00:04:31.201 ************************************ 00:04:31.201 12:56:52 event -- common/autotest_common.sh@1142 -- # return 0 00:04:31.201 12:56:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:31.201 12:56:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.201 12:56:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.201 12:56:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.201 12:56:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.201 ************************************ 00:04:31.201 START TEST cpu_locks 00:04:31.201 ************************************ 00:04:31.201 12:56:52 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.201 * Looking for test storage... 00:04:31.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:31.201 12:56:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:31.201 12:56:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:31.201 12:56:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:31.459 12:56:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:31.459 12:56:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.459 12:56:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.459 12:56:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.459 ************************************ 00:04:31.459 START TEST default_locks 00:04:31.459 ************************************ 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3713154 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3713154 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3713154 ']' 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
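Every test above is bracketed by the same starred banners and a real/user/sys timing line, which come from the run_test wrapper (the recurring sh@1099, @1105, @1124 and @1142 trace references). A sketch of that wrapper, with the control flow assumed from those reference points rather than copied from autotest_common.sh:

    run_test() {
      [ $# -le 1 ] && return 1     # guard visible in the trace as '[' 2 -le 1 ']'
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # produces the real/user/sys lines seen at each END TEST
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return 0
    }

So cpu_locks runs as run_test cpu_locks .../test/event/cpu_locks.sh, and each case inside it (default_locks, default_locks_via_rpc, and so on) is nested through the same wrapper, which is why the banners stack.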
00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.459 12:56:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.459 [2024-07-15 12:56:52.976848] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:31.459 [2024-07-15 12:56:52.976959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713154 ] 00:04:31.459 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.459 [2024-07-15 12:56:53.034101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.459 [2024-07-15 12:56:53.142420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.724 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.724 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:31.724 12:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3713154 00:04:31.724 12:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3713154 00:04:31.724 12:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.291 lslocks: write error 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3713154 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3713154 ']' 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3713154 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713154 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713154' 00:04:32.291 killing process with pid 3713154 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3713154 00:04:32.291 12:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3713154 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3713154 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3713154 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3713154 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3713154 ']' 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3713154) - No such process 00:04:32.857 ERROR: process (pid: 3713154) is no longer running 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:32.857 00:04:32.857 real 0m1.378s 00:04:32.857 user 0m1.302s 00:04:32.857 sys 0m0.568s 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.857 12:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 ************************************ 00:04:32.857 END TEST default_locks 00:04:32.857 ************************************ 00:04:32.857 12:56:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:32.857 12:56:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:32.857 12:56:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.857 12:56:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.857 12:56:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 ************************************ 00:04:32.857 START TEST default_locks_via_rpc 00:04:32.857 ************************************ 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3713318 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.857 12:56:54 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3713318 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3713318 ']' 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.857 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.857 [2024-07-15 12:56:54.406407] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:32.857 [2024-07-15 12:56:54.406501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713318 ] 00:04:32.857 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.857 [2024-07-15 12:56:54.468388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.115 [2024-07-15 12:56:54.584536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3713318 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3713318 00:04:33.373 12:56:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.630 
12:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3713318 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3713318 ']' 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3713318 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713318 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713318' 00:04:33.630 killing process with pid 3713318 00:04:33.630 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3713318 00:04:33.631 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3713318 00:04:34.196 00:04:34.196 real 0m1.242s 00:04:34.196 user 0m1.177s 00:04:34.196 sys 0m0.520s 00:04:34.196 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.196 12:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 ************************************ 00:04:34.196 END TEST default_locks_via_rpc 00:04:34.196 ************************************ 00:04:34.196 12:56:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:34.196 12:56:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:34.196 12:56:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.196 12:56:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.196 12:56:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 ************************************ 00:04:34.196 START TEST non_locking_app_on_locked_coremask 00:04:34.196 ************************************ 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3713482 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3713482 /var/tmp/spdk.sock 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3713482 ']' 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.196 12:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 [2024-07-15 12:56:55.697977] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:34.196 [2024-07-15 12:56:55.698055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713482 ] 00:04:34.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.196 [2024-07-15 12:56:55.755265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.196 [2024-07-15 12:56:55.864609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3713611 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3713611 /var/tmp/spdk2.sock 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3713611 ']' 00:04:34.454 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.455 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.455 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.455 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.455 12:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.714 [2024-07-15 12:56:56.178750] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:34.714 [2024-07-15 12:56:56.178823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713611 ] 00:04:34.714 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.714 [2024-07-15 12:56:56.270629] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
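The NOTICE just above, "CPU core locks deactivated.", is the crux of non_locking_app_on_locked_coremask: the first target still owns the core 0 lock file, and this second instance only boots because --disable-cpumask-locks tells it to skip claiming the cores it runs on. Outside the harness the same arrangement looks roughly like this (relative paths assume an SPDK build tree; both options appear verbatim in the trace above):

  build/bin/spdk_tgt -m 0x1 &                # first instance claims core 0
  pid1=$!
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                    # shares core 0 but claims nothing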
00:04:34.714 [2024-07-15 12:56:56.270662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.973 [2024-07-15 12:56:56.515693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.540 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.540 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:35.540 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3713482 00:04:35.540 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.540 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3713482 00:04:36.105 lslocks: write error 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3713482 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3713482 ']' 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3713482 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713482 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713482' 00:04:36.105 killing process with pid 3713482 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3713482 00:04:36.105 12:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3713482 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3713611 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3713611 ']' 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3713611 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713611 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713611' 00:04:37.038 
killing process with pid 3713611 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3713611 00:04:37.038 12:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3713611 00:04:37.611 00:04:37.611 real 0m3.368s 00:04:37.611 user 0m3.505s 00:04:37.611 sys 0m1.032s 00:04:37.611 12:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.611 12:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 END TEST non_locking_app_on_locked_coremask 00:04:37.611 ************************************ 00:04:37.611 12:56:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:37.611 12:56:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:37.611 12:56:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.611 12:56:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.611 12:56:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 START TEST locking_app_on_unlocked_coremask 00:04:37.611 ************************************ 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3713922 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3713922 /var/tmp/spdk.sock 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3713922 ']' 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.611 12:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 [2024-07-15 12:56:59.119652] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
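All of these tests assert lock ownership the same way: the locks_exist helper seen in the traces lists the POSIX locks held by the target pid with lslocks and greps for the spdk_cpu_lock files. (The stray "lslocks: write error" lines are just lslocks hitting a broken pipe once grep -q exits on its first match; they are not failures.) Reconstructed from the trace, minus the harness plumbing:

  # Succeeds only if the given pid holds a lock on a /var/tmp/spdk_cpu_lock_* file.
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }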
00:04:37.611 [2024-07-15 12:56:59.119754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713922 ] 00:04:37.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.611 [2024-07-15 12:56:59.181995] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:37.611 [2024-07-15 12:56:59.182039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.611 [2024-07-15 12:56:59.297330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3714071 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3714071 /var/tmp/spdk2.sock 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3714071 ']' 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.546 12:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.546 [2024-07-15 12:57:00.104422] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:04:38.546 [2024-07-15 12:57:00.104518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714071 ] 00:04:38.546 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.546 [2024-07-15 12:57:00.195508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.804 [2024-07-15 12:57:00.421526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.371 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.371 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:39.371 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3714071 00:04:39.371 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3714071 00:04:39.371 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.937 lslocks: write error 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3713922 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3713922 ']' 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3713922 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713922 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713922' 00:04:39.937 killing process with pid 3713922 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3713922 00:04:39.937 12:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3713922 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3714071 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3714071 ']' 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3714071 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3714071 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3714071' 00:04:40.870 killing process with pid 3714071 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3714071 00:04:40.870 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3714071 00:04:41.436 00:04:41.436 real 0m3.780s 00:04:41.436 user 0m4.158s 00:04:41.436 sys 0m1.059s 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.436 ************************************ 00:04:41.436 END TEST locking_app_on_unlocked_coremask 00:04:41.436 ************************************ 00:04:41.436 12:57:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:41.436 12:57:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:41.436 12:57:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.436 12:57:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.436 12:57:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.436 ************************************ 00:04:41.436 START TEST locking_app_on_locked_coremask 00:04:41.436 ************************************ 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3714592 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3714592 /var/tmp/spdk.sock 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3714592 ']' 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.436 12:57:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.436 [2024-07-15 12:57:02.951479] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
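The test that just finished shows the opt-out is one-sided: a target started with --disable-cpumask-locks neither claims core 0 nor blocks anyone else, so the second, normally configured instance took the lock on that very core. The earlier default_locks_via_rpc trace flips the same switch at runtime over RPC instead of at startup; from a shell that is roughly the following (the rpc.py path assumes an SPDK checkout, and the comments describe the RPCs' documented intent rather than anything this particular trace prints):

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # stop holding core lock files
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim locks for the current cpumask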
00:04:41.436 [2024-07-15 12:57:02.951563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714592 ] 00:04:41.436 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.436 [2024-07-15 12:57:03.014239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.436 [2024-07-15 12:57:03.135445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3714604 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3714604 /var/tmp/spdk2.sock 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3714604 /var/tmp/spdk2.sock 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3714604 /var/tmp/spdk2.sock 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3714604 ']' 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.708 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.975 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.975 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.975 12:57:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.975 [2024-07-15 12:57:03.449921] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
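The NOT wrapper around waitforlisten in the trace above is how the harness asserts an expected failure: this second fully-locked instance must not come up while pid 3714592 still owns core 0. Stripped of the es bookkeeping visible in the trace, the helper reduces to an exit-status inverter, roughly:

  # Succeed only when the wrapped command fails.
  NOT() {
    if "$@"; then
      return 1
    fi
    return 0
  }
  # Harness usage per the trace: NOT waitforlisten 3714604 /var/tmp/spdk2.sock
  # passes only because the second target never starts listening.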
00:04:41.975 [2024-07-15 12:57:03.449996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714604 ] 00:04:41.975 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.975 [2024-07-15 12:57:03.542964] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3714592 has claimed it. 00:04:41.975 [2024-07-15 12:57:03.543023] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3714604) - No such process 00:04:42.540 ERROR: process (pid: 3714604) is no longer running 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3714592 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3714592 00:04:42.540 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.106 lslocks: write error 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3714592 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3714592 ']' 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3714592 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3714592 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3714592' 00:04:43.106 killing process with pid 3714592 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3714592 00:04:43.106 12:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3714592 00:04:43.364 00:04:43.364 real 0m2.151s 00:04:43.364 user 0m2.318s 00:04:43.364 sys 0m0.684s 00:04:43.364 12:57:05 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.364 12:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.364 ************************************ 00:04:43.364 END TEST locking_app_on_locked_coremask 00:04:43.364 ************************************ 00:04:43.622 12:57:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:43.622 12:57:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:43.622 12:57:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.622 12:57:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.622 12:57:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 ************************************ 00:04:43.622 START TEST locking_overlapped_coremask 00:04:43.622 ************************************ 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3714896 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3714896 /var/tmp/spdk.sock 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3714896 ']' 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.622 12:57:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.622 [2024-07-15 12:57:05.146155] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
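Note the cpumask change for this test: -m 0x7 is binary 111, so the target runs three reactors on cores 0 through 2 and should claim three lock files, /var/tmp/spdk_cpu_lock_000 through _002 (those exact paths show up in the check further down). A quick, purely illustrative way to read any mask in bash:

  mask=0x7
  for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && printf 'core %d -> /var/tmp/spdk_cpu_lock_%03d\n' "$core" "$core"
  done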
00:04:43.622 [2024-07-15 12:57:05.146267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714896 ] 00:04:43.622 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.622 [2024-07-15 12:57:05.208388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.880 [2024-07-15 12:57:05.325794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.880 [2024-07-15 12:57:05.325854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.880 [2024-07-15 12:57:05.325858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.445 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.445 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:44.445 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3714926 00:04:44.445 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3714926 /var/tmp/spdk2.sock 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3714926 /var/tmp/spdk2.sock 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3714926 /var/tmp/spdk2.sock 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3714926 ']' 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.446 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.446 [2024-07-15 12:57:06.129493] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
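The second target's mask is 0x1c, cores 2 through 4, which intersects the first target's 0x7 on exactly one core, so the refusal that follows is expected. The overlap is plain bitwise arithmetic:

  printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2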
00:04:44.446 [2024-07-15 12:57:06.129589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714926 ] 00:04:44.703 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.703 [2024-07-15 12:57:06.223741] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3714896 has claimed it. 00:04:44.703 [2024-07-15 12:57:06.223799] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3714926) - No such process 00:04:45.268 ERROR: process (pid: 3714926) is no longer running 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3714896 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3714896 ']' 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3714896 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3714896 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3714896' 00:04:45.268 killing process with pid 3714896 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3714896 00:04:45.268 12:57:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3714896 00:04:45.833 00:04:45.833 real 0m2.225s 00:04:45.833 user 0m6.225s 00:04:45.833 sys 0m0.509s 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.833 ************************************ 00:04:45.833 END TEST locking_overlapped_coremask 00:04:45.833 ************************************ 00:04:45.833 12:57:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:45.833 12:57:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:45.833 12:57:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.833 12:57:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.833 12:57:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.833 ************************************ 00:04:45.833 START TEST locking_overlapped_coremask_via_rpc 00:04:45.833 ************************************ 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3715352 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3715352 /var/tmp/spdk.sock 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3715352 ']' 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.833 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.833 [2024-07-15 12:57:07.413571] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:45.833 [2024-07-15 12:57:07.413669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715352 ] 00:04:45.833 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.833 [2024-07-15 12:57:07.472939] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
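check_remaining_locks, which ran just before the kill above, is a pure-bash assertion that exactly the expected lock files survive the failed overlapping claim: it globs the lock directory and compares the expansion against a brace-generated list. Reconstructed from the trace (the same two arrays, minus the harness plumbing):

  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # Any missing or extra lock file makes the flattened strings differ.
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }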
00:04:45.833 [2024-07-15 12:57:07.472984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.091 [2024-07-15 12:57:07.585179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.091 [2024-07-15 12:57:07.585238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.091 [2024-07-15 12:57:07.585241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3715452 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3715452 /var/tmp/spdk2.sock 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3715452 ']' 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.348 12:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.348 [2024-07-15 12:57:07.900408] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:04:46.348 [2024-07-15 12:57:07.900498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715452 ] 00:04:46.348 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.348 [2024-07-15 12:57:07.993290] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
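With both targets launched under --disable-cpumask-locks, the overlapping masks 0x7 and 0x1c can coexist for now: core 2 is shared and nothing has been claimed. What this test exercises next is switching the locks on over RPC one target at a time, so the first claimer wins; the sequence, using the sockets from the trace, amounts to:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # must fail: core 2 is taken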
00:04:46.348 [2024-07-15 12:57:07.993332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.606 [2024-07-15 12:57:08.216962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.606 [2024-07-15 12:57:08.217021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:46.606 [2024-07-15 12:57:08.217023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.170 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.170 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:47.170 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:47.170 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.170 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.427 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.427 [2024-07-15 12:57:08.875978] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3715352 has claimed it. 
00:04:47.427 request:
00:04:47.428 {
00:04:47.428 "method": "framework_enable_cpumask_locks",
00:04:47.428 "req_id": 1
00:04:47.428 }
00:04:47.428 Got JSON-RPC error response
00:04:47.428 response:
00:04:47.428 {
00:04:47.428 "code": -32603,
00:04:47.428 "message": "Failed to claim CPU core: 2"
00:04:47.428 }
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3715352 /var/tmp/spdk.sock
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3715352 ']'
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:47.428 12:57:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3715452 /var/tmp/spdk2.sock
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3715452 ']'
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
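The JSON-RPC exchange above is what the test's rpc_cmd helper wraps: framework_enable_cpumask_locks takes no parameters, and the -32603 internal error surfaces the failed claim on core 2. Equivalently, with SPDK's stock scripts/rpc.py (a sketch, not the test's literal code):

    # succeeds on the first target, which already owns its cores
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # fails on the second target: code -32603, "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks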
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:47.684 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:47.941
00:04:47.941 real 0m2.038s
00:04:47.941 user 0m1.061s
00:04:47.941 sys 0m0.182s
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.941 12:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.941 ************************************
00:04:47.941 END TEST locking_overlapped_coremask_via_rpc
00:04:47.941 ************************************
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:04:47.941 12:57:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:04:47.941 12:57:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3715352 ]]
00:04:47.941 12:57:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3715352
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3715352 ']'
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3715352
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3715352
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3715352'
killing process with pid 3715352
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3715352
00:04:47.941 12:57:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3715352
00:04:48.198 12:57:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3715452 ]]
00:04:48.198 12:57:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3715452
00:04:48.198 12:57:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3715452 ']'
00:04:48.198 12:57:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3715452
00:04:48.198 12:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:04:48.198 12:57:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:48.198 12:57:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3715452
00:04:48.455 12:57:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:04:48.455 12:57:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:04:48.455 12:57:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3715452'
killing process with pid 3715452
00:04:48.455 12:57:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3715452
00:04:48.455 12:57:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3715452
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3715352 ]]
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3715352
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3715352 ']'
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3715352
00:04:48.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3715352) - No such process
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3715352 is not found'
Process with pid 3715352 is not found
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3715452 ]]
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3715452
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3715452 ']'
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3715452
00:04:48.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3715452) - No such process
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3715452 is not found'
Process with pid 3715452 is not found
00:04:48.713 12:57:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:48.713
00:04:48.713 real 0m17.526s
00:04:48.713 user 0m30.865s
00:04:48.713 sys 0m5.447s
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:48.713 12:57:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:48.713 ************************************
00:04:48.713 END TEST cpu_locks
00:04:48.713 ************************************
00:04:48.713 12:57:10 event -- common/autotest_common.sh@1142 -- # return 0
00:04:48.969
00:04:48.969 real 0m41.575s
00:04:48.969 user 1m18.748s
00:04:48.969 sys 0m9.528s
00:04:48.969 12:57:10 event -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:48.969 12:57:10 event -- common/autotest_common.sh@10 -- # set +x
00:04:48.969 ************************************
00:04:48.969 END TEST event
00:04:48.969 ************************************
00:04:48.969 12:57:10 -- common/autotest_common.sh@1142 -- # return 0
00:04:48.969 12:57:10 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:48.969 12:57:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:48.969 12:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
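The check_remaining_locks helper traced above condenses to a small glob-against-expectation comparison: after the RPC succeeded on the first target, exactly the lock files for cores 0-2 must exist. Reconstructed from the xtrace (a sketch, assuming bash and the /var/tmp naming seen in the log):

    # expected: one lock file per claimed core, /var/tmp/spdk_cpu_lock_000..002
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]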
12:57:10 -- common/autotest_common.sh@10 -- # set +x
00:04:48.970 ************************************
00:04:48.970 START TEST thread
00:04:48.970 ************************************
00:04:48.970 12:57:10 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:48.970 * Looking for test storage...
00:04:48.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:04:48.970 12:57:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:48.970 12:57:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:04:48.970 12:57:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:48.970 12:57:10 thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.970 ************************************
00:04:48.970 START TEST thread_poller_perf
00:04:48.970 ************************************
00:04:48.970 12:57:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:48.970 [2024-07-15 12:57:10.541290] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:48.970 [2024-07-15 12:57:10.541357] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716079 ]
00:04:48.970 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.970 [2024-07-15 12:57:10.605263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.226 [2024-07-15 12:57:10.723115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.226 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:04:50.161 ======================================
00:04:50.161 busy:2717218360 (cyc)
00:04:50.161 total_run_count: 297000
00:04:50.161 tsc_hz: 2700000000 (cyc)
00:04:50.161 ======================================
00:04:50.161 poller_cost: 9148 (cyc), 3388 (nsec)
00:04:50.161
00:04:50.161 real 0m1.330s
00:04:50.161 user 0m1.239s
00:04:50.161 sys 0m0.086s
00:04:50.161 12:57:11 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:50.161 12:57:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:50.161 ************************************
00:04:50.161 END TEST thread_poller_perf
00:04:50.161 ************************************
00:04:50.426 12:57:11 thread -- common/autotest_common.sh@1142 -- # return 0
00:04:50.426 12:57:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:50.426 12:57:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:04:50.426 12:57:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:50.426 12:57:11 thread -- common/autotest_common.sh@10 -- # set +x
00:04:50.426 ************************************
00:04:50.426 START TEST thread_poller_perf
00:04:50.426 ************************************
00:04:50.426 12:57:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:50.426 [2024-07-15 12:57:11.913563] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:50.426 [2024-07-15 12:57:11.913627] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716353 ]
00:04:50.426 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.426 [2024-07-15 12:57:11.972982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.426 [2024-07-15 12:57:12.093136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.426 Running 1000 pollers for 1 seconds with 0 microseconds period.
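The poller_cost line is straight arithmetic on the two numbers above it: busy cycles divided by run count, then converted to nanoseconds via tsc_hz. Worked out: 2717218360 / 297000 ≈ 9148 cycles per poll, and 9148 / 2.7 cycles-per-ns ≈ 3388 ns, matching "poller_cost: 9148 (cyc), 3388 (nsec)". As a quick shell check:

    echo $(( 2717218360 / 297000 ))             # 9148 cycles per poller run
    echo $(( 9148 * 1000000000 / 2700000000 ))  # 3388 ns at tsc_hz = 2.7 GHz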
00:04:51.878 ======================================
00:04:51.878 busy:2702430327 (cyc)
00:04:51.878 total_run_count: 3705000
00:04:51.878 tsc_hz: 2700000000 (cyc)
00:04:51.878 ======================================
00:04:51.878 poller_cost: 729 (cyc), 270 (nsec)
00:04:51.878
00:04:51.878 real 0m1.314s
00:04:51.878 user 0m1.226s
00:04:51.878 sys 0m0.083s
00:04:51.878 12:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:51.878 12:57:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:51.878 ************************************
00:04:51.878 END TEST thread_poller_perf
00:04:51.878 ************************************
00:04:51.878 12:57:13 thread -- common/autotest_common.sh@1142 -- # return 0
00:04:51.878 12:57:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:04:51.878
00:04:51.878 real 0m2.789s
00:04:51.878 user 0m2.527s
00:04:51.878 sys 0m0.263s
00:04:51.878 12:57:13 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:51.878 12:57:13 thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.878 ************************************
00:04:51.878 END TEST thread
00:04:51.878 ************************************
00:04:51.878 12:57:13 -- common/autotest_common.sh@1142 -- # return 0
00:04:51.878 12:57:13 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:04:51.878 12:57:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:51.878 12:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.878 12:57:13 -- common/autotest_common.sh@10 -- # set +x
00:04:51.878 ************************************
00:04:51.878 START TEST accel
00:04:51.878 ************************************
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:04:51.878 * Looking for test storage...
00:04:51.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:04:51.878 12:57:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:04:51.878 12:57:13 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:04:51.878 12:57:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:51.878 12:57:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3716559
00:04:51.878 12:57:13 accel -- accel/accel.sh@63 -- # waitforlisten 3716559
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@829 -- # '[' -z 3716559 ']'
00:04:51.878 12:57:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.878 12:57:13 accel -- accel/accel.sh@61 -- # build_accel_config
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:51.878 12:57:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
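The same formula explains the second run (-l 0, a zero-microsecond poller period): 2702430327 / 3705000 ≈ 729 cycles and 729 / 2.7 ≈ 270 ns. With no sleep between polls the run count is roughly 12x higher, so the per-call cost collapses to bare dispatch overhead:

    echo $(( 2702430327 / 3705000 ))           # 729 cycles per run
    echo $(( 729 * 1000000000 / 2700000000 ))  # 270 ns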
00:04:51.878 12:57:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:51.878 12:57:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:51.878 12:57:13 accel -- common/autotest_common.sh@10 -- # set +x
00:04:51.878 12:57:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:51.878 12:57:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:51.878 12:57:13 accel -- accel/accel.sh@40 -- # local IFS=,
00:04:51.878 12:57:13 accel -- accel/accel.sh@41 -- # jq -r .
00:04:52.138 [2024-07-15 12:57:13.392666] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:52.138 [2024-07-15 12:57:13.392752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716559 ]
00:04:52.138 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.138 [2024-07-15 12:57:13.453695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.138 [2024-07-15 12:57:13.565129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.138 12:57:13 accel -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:52.138 12:57:13 accel -- common/autotest_common.sh@862 -- # return 0
00:04:52.138 12:57:13 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:04:52.138 12:57:13 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:04:52.138 12:57:13 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:04:52.138 12:57:13 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:04:52.138 12:57:13 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:04:52.138 12:57:13 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:04:52.138 12:57:13 accel -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:52.138 12:57:13 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:04:52.138 12:57:13 accel -- common/autotest_common.sh@10 -- # set +x
00:04:52.398 12:57:13 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.398 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.398 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.398 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # IFS==
00:04:52.399 12:57:13 accel -- accel/accel.sh@72 -- # read -r opc module
00:04:52.399 12:57:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:04:52.399 12:57:13 accel -- accel/accel.sh@75 -- # killprocess 3716559
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@948 -- # '[' -z 3716559 ']'
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@952 -- # kill -0 3716559
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@953 -- # uname
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3716559
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3716559'
killing process with pid 3716559
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@967 -- # kill 3716559
00:04:52.399 12:57:13 accel -- common/autotest_common.sh@972 -- # wait 3716559
00:04:52.659 12:57:14 accel -- accel/accel.sh@76 -- # trap - ERR
00:04:52.919 12:57:14 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@10 -- # set +x
00:04:52.919 12:57:14 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:04:52.919 12:57:14 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:04:52.919 12:57:14 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.919 12:57:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:52.919 12:57:14 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.919 12:57:14 accel -- common/autotest_common.sh@10 -- # set +x
00:04:52.919 ************************************
00:04:52.919 START TEST accel_missing_filename
00:04:52.919 ************************************
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:52.919 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:04:52.919 12:57:14 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:04:52.919 [2024-07-15 12:57:14.468627] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:52.919 [2024-07-15 12:57:14.468680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716727 ]
00:04:52.919 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.919 [2024-07-15 12:57:14.530114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.180 [2024-07-15 12:57:14.648744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.180 [2024-07-15 12:57:14.710397] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:53.180 [2024-07-15 12:57:14.796162] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:04:53.480 A filename is required.
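accel_missing_filename asserts the failure path: compress/decompress workloads need -l <input file> (see the -l option in the help text printed further below), so a bare "-w compress" must abort; the NOT wrapper inverts the status, and the raw exit code 234 is normalized to es=1 in the next chunk. The assertion's shape, as a sketch (paths relative to the SPDK tree, not the test's literal code):

    # expected to fail: compress needs -l <uncompressed input file>
    ./build/examples/accel_perf -t 1 -w compress && exit 1
    # also expected to fail (next test): -y verify is unsupported for compress
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y && exit 1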
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:53.480
00:04:53.480 real 0m0.471s
00:04:53.480 user 0m0.359s
00:04:53.480 sys 0m0.142s
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:53.480 12:57:14 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:04:53.480 ************************************
00:04:53.480 END TEST accel_missing_filename
00:04:53.480 ************************************
00:04:53.480 12:57:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:53.480 12:57:14 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:04:53.480 12:57:14 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:04:53.480 12:57:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:53.480 12:57:14 accel -- common/autotest_common.sh@10 -- # set +x
00:04:53.480 ************************************
00:04:53.480 START TEST accel_compress_verify
00:04:53.480 ************************************
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:53.480 12:57:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:04:53.480 12:57:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:04:53.480 12:57:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:04:53.481 12:57:14 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:04:53.481 [2024-07-15 12:57:14.990053] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:53.481 [2024-07-15 12:57:14.990113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716748 ]
00:04:53.481 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.481 [2024-07-15 12:57:15.054141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.741 [2024-07-15 12:57:15.176420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.741 [2024-07-15 12:57:15.237978] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:53.741 [2024-07-15 12:57:15.318720] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:04:53.741
00:04:53.741 Compression does not support the verify option, aborting.
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:53.741
00:04:53.741 real 0m0.463s
00:04:53.741 user 0m0.346s
00:04:53.741 sys 0m0.151s
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:53.741 12:57:15 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:04:53.741 ************************************
00:04:53.741 END TEST accel_compress_verify
00:04:53.741 ************************************
00:04:54.001 12:57:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:54.001 12:57:15 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:04:54.001 12:57:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:04:54.001 12:57:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.001 12:57:15 accel -- common/autotest_common.sh@10 -- # set +x
00:04:54.001 ************************************
00:04:54.001 START TEST accel_wrong_workload
00:04:54.001 ************************************
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:04:54.002 12:57:15 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:04:54.002 Unsupported workload type: foobar
00:04:54.002 [2024-07-15 12:57:15.494642] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:04:54.002 accel_perf options:
00:04:54.002 [-h help message]
00:04:54.002 [-q queue depth per core]
00:04:54.002 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:04:54.002 [-T number of threads per core
00:04:54.002 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:04:54.002 [-t time in seconds]
00:04:54.002 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:04:54.002 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:04:54.002 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:04:54.002 [-l for compress/decompress workloads, name of uncompressed input file
00:04:54.002 [-S for crc32c workload, use this seed value (default 0)
00:04:54.002 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:04:54.002 [-f for fill workload, use this BYTE value (default 255)
00:04:54.002 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:04:54.002 [-y verify result if this switch is on]
00:04:54.002 [-a tasks to allocate per core (default: same value as -q)]
00:04:54.002 Can be used to spread operations across a wider range of memory.
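The option listing above is accel_perf's own usage text, printed because -w foobar is not a recognized workload. For contrast, a valid invocation per that help (values illustrative):

    # xor across 3 source buffers for 1 second, verifying results
    ./build/examples/accel_perf -t 1 -w xor -x 3 -y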
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:54.002
00:04:54.002 real 0m0.023s
00:04:54.002 user 0m0.014s
00:04:54.002 sys 0m0.008s
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.002 12:57:15 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:04:54.002 ************************************
00:04:54.002 END TEST accel_wrong_workload
00:04:54.002 ************************************
00:04:54.002 Error: writing output failed: Broken pipe
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:54.002 12:57:15 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@10 -- # set +x
00:04:54.002 ************************************
00:04:54.002 START TEST accel_negative_buffers
00:04:54.002 ************************************
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:04:54.002 12:57:15 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:04:54.002 -x option must be non-negative.
00:04:54.002 [2024-07-15 12:57:15.564654] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:04:54.002 accel_perf options:
00:04:54.002 [-h help message]
00:04:54.002 [-q queue depth per core]
00:04:54.002 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:04:54.002 [-T number of threads per core
00:04:54.002 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:04:54.002 [-t time in seconds]
00:04:54.002 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:04:54.002 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:04:54.002 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:04:54.002 [-l for compress/decompress workloads, name of uncompressed input file
00:04:54.002 [-S for crc32c workload, use this seed value (default 0)
00:04:54.002 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:04:54.002 [-f for fill workload, use this BYTE value (default 255)
00:04:54.002 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:04:54.002 [-y verify result if this switch is on]
00:04:54.002 [-a tasks to allocate per core (default: same value as -q)]
00:04:54.002 Can be used to spread operations across a wider range of memory.
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:54.002
00:04:54.002 real 0m0.023s
00:04:54.002 user 0m0.012s
00:04:54.002 sys 0m0.011s
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.002 12:57:15 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:04:54.002 ************************************
00:04:54.002 END TEST accel_negative_buffers
00:04:54.002 ************************************
00:04:54.002 Error: writing output failed: Broken pipe
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:54.002 12:57:15 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.002 12:57:15 accel -- common/autotest_common.sh@10 -- # set +x
00:04:54.002 ************************************
00:04:54.002 START TEST accel_crc32c
00:04:54.002 ************************************
00:04:54.002 12:57:15 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:04:54.002 12:57:15 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:04:54.002 [2024-07-15 12:57:15.627569] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:54.002 [2024-07-15 12:57:15.627621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716940 ]
00:04:54.002 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.002 [2024-07-15 12:57:15.689013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.263 [2024-07-15 12:57:15.807900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:04:54.263 12:57:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:55.640 12:57:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.640 00:04:55.640 real 0m1.476s 00:04:55.640 user 0m1.324s 00:04:55.640 sys 0m0.155s 00:04:55.640 12:57:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.640 12:57:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:55.640 ************************************ 00:04:55.640 END TEST accel_crc32c 00:04:55.640 ************************************ 00:04:55.640 12:57:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.640 12:57:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:55.640 12:57:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:55.640 12:57:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.640 12:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.640 ************************************ 00:04:55.640 START TEST accel_crc32c_C2 00:04:55.640 ************************************ 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:55.640 12:57:17 accel.accel_crc32c_C2 
00:04:55.640 ************************************
00:04:55.640 START TEST accel_crc32c_C2
00:04:55.640 ************************************
00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:04:55.640 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:04:55.640 [2024-07-15 12:57:17.153790] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:55.640 [2024-07-15 12:57:17.153852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717097 ]
00:04:55.640 EAL: No free 2048 kB hugepages reported on node 1
00:04:55.640 [2024-07-15 12:57:17.215352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.898 [2024-07-15 12:57:17.333364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:04:55.898 12:57:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:04:57.279 12:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:57.279 12:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:04:57.279 12:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:57.279 real 0m1.464s
00:04:57.279 user 0m1.329s
00:04:57.279 sys 0m0.137s
00:04:57.279 ************************************
00:04:57.279 END TEST accel_crc32c_C2
00:04:57.279 ************************************
00:04:57.279 12:57:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:57.279 12:57:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
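The -c /dev/fd/62 argument in each accel_perf invocation is the signature of bash process substitution: the JSON assembled by build_accel_config (the accel_json_cfg and jq -r . records) reaches the binary as an anonymous pipe rather than a file on disk. A rough equivalent, with an empty placeholder config since the real JSON never appears in this log:

  # <(...) expands to a /dev/fd/NN path that accel_perf simply opens and reads.
  cfg='{}'   # placeholder; the suite assembles this from its accel_json_cfg array
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -c <(printf '%s' "$cfg" | jq -r .) -t 1 -w copy -y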
00:04:57.279 ************************************
00:04:57.279 START TEST accel_copy
00:04:57.279 ************************************
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:04:57.280 [2024-07-15 12:57:18.669110] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:57.280 [2024-07-15 12:57:18.669172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717365 ]
00:04:57.280 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.280 [2024-07-15 12:57:18.732266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.280 [2024-07-15 12:57:18.846288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:04:57.280 12:57:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:04:58.659 12:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:58.659 12:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:04:58.659 12:57:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:58.659 real 0m1.468s
00:04:58.659 user 0m1.326s
00:04:58.659 sys 0m0.144s
00:04:58.659 ************************************
00:04:58.659 END TEST accel_copy
00:04:58.659 ************************************
00:04:58.659 12:57:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:58.659 12:57:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
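Every test above follows the same envelope: a START banner, the traced body, bash's time output (the real/user/sys triple), then an END banner and return 0. A stripped-down run_test in the same spirit; the real autotest_common.sh helper also toggles xtrace and validates its arguments (the '[' 7 -le 1 ']' records), which is elided here:

  # Minimal rendition of the banner-plus-timing wrapper seen in this log.
  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                 # emits the real/user/sys lines seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
  # usage, mirroring the dispatch records: run_test accel_copy accel_test -t 1 -w copy -y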
00:04:58.659 ************************************
00:04:58.659 START TEST accel_fill
00:04:58.659 ************************************
00:04:58.659 12:57:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:58.659 12:57:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:58.659 12:57:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:04:58.659 12:57:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:04:58.659 [2024-07-15 12:57:20.185937] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:04:58.659 [2024-07-15 12:57:20.186001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717532 ]
00:04:58.659 EAL: No free 2048 kB hugepages reported on node 1
00:04:58.659 [2024-07-15 12:57:20.249727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.920 [2024-07-15 12:57:20.368767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:04:58.920 12:57:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:00.303 12:57:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:00.303 12:57:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:00.303 12:57:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:00.303 real 0m1.482s
00:05:00.303 user 0m1.334s
00:05:00.303 sys 0m0.151s
00:05:00.303 ************************************
00:05:00.303 END TEST accel_fill
00:05:00.303 ************************************
00:05:00.303 12:57:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:00.303 12:57:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
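The three @27 records that close each test above are the entire pass criterion: the dump must have named a module, it must have named the opcode under test, and on this run the expected module is software since no hardware engine is configured (the \s\o\f\t\w\a\r\e form is just xtrace escaping the right-hand pattern of [[ == ]]). In isolation:

  # All three must hold or the test fails (cf. the accel.sh@27 records).
  [[ -n $accel_module ]]           # some engine was reported by the dump
  [[ -n $accel_opc ]]              # the requested opcode was reported
  [[ $accel_module == software ]]  # and the work ran on the software engine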
00:05:00.303 ************************************
00:05:00.303 START TEST accel_copy_crc32c
00:05:00.303 ************************************
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:00.304 [2024-07-15 12:57:21.714006] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:00.304 [2024-07-15 12:57:21.714066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717685 ]
00:05:00.304 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.304 [2024-07-15 12:57:21.779418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.304 [2024-07-15 12:57:21.894993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:00.304 12:57:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:01.683 12:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:01.683 12:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:01.683 12:57:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:01.683 real 0m1.464s
00:05:01.683 user 0m1.323s
00:05:01.683 sys 0m0.144s
00:05:01.683 ************************************
00:05:01.683 END TEST accel_copy_crc32c
00:05:01.683 ************************************
00:05:01.683 12:57:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:01.683 12:57:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
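Six workloads run back to back through the same accel_test entry point; only the -w value and per-workload knobs change (fill adds -f 128 -q 64 -a 64, the *_C2 variants add -C 2). Condensed into one loop purely for illustration; the suite actually dispatches each one via its own numbered run_test line, as above:

  # Illustration only, not the suite's real dispatch mechanism.
  for w in crc32c copy fill copy_crc32c dualcast compare; do
    extra=''
    [ "$w" = fill ] && extra='-f 128 -q 64 -a 64'   # fill pattern/queue/alignment knobs
    accel_test -t 1 -w "$w" $extra -y
  done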
00:05:01.683 ************************************
00:05:01.683 START TEST accel_copy_crc32c_C2
00:05:01.683 ************************************
00:05:01.684 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:01.684 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:01.684 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:01.684 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:01.684 [2024-07-15 12:57:23.230548] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:01.684 [2024-07-15 12:57:23.230614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717957 ]
00:05:01.684 EAL: No free 2048 kB hugepages reported on node 1
00:05:01.684 [2024-07-15 12:57:23.294066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:01.942 [2024-07-15 12:57:23.410835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:01.942 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:01.943 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:01.943 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:01.943 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:01.943 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:01.943 12:57:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:03.325 12:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:03.325 12:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:03.325 12:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:03.325 real 0m1.463s
00:05:03.325 user 0m1.316s
00:05:03.325 sys 0m0.150s
00:05:03.325 ************************************
00:05:03.325 END TEST accel_copy_crc32c_C2
00:05:03.325 ************************************
00:05:03.325 12:57:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:03.325 12:57:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
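One thing the copy_crc32c_C2 trace above shows that the plain copy_crc32c run did not: the buffer records change from two '4096 bytes' values to a '4096 bytes'/'8192 bytes' pair. Read literally, -C 2 chains two 4096-byte sources, so the copy destination doubles; this is an inference from these records, not documented accel_perf behavior:

  block=4096; chain=2                    # -C 2, as an inferred chained-buffer count
  echo "src: ${block} bytes per buffer, ${chain} buffers"
  echo "dst: $((block * chain)) bytes"   # matches the '8192 bytes' record above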
00:05:03.325 [2024-07-15 12:57:24.748310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718120 ] 00:05:03.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.325 [2024-07-15 12:57:24.811774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.325 [2024-07-15 12:57:24.927810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.325 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.326 12:57:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.702 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.703 12:57:26 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:04.703 12:57:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.703 00:05:04.703 real 0m1.479s 00:05:04.703 user 0m1.343s 00:05:04.703 sys 0m0.138s 00:05:04.703 12:57:26 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.703 12:57:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:04.703 ************************************ 00:05:04.703 END TEST accel_dualcast 00:05:04.703 ************************************ 00:05:04.703 12:57:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.703 12:57:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:04.703 12:57:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:04.703 12:57:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.703 12:57:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.703 ************************************ 00:05:04.703 START TEST accel_compare 00:05:04.703 ************************************ 00:05:04.703 12:57:26 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:04.703 12:57:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:04.703 [2024-07-15 12:57:26.269337] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
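The accel_compare unit starting here follows the same template as the accel_dualcast unit above: accel.sh builds a JSON accel configuration, feeds it to the accel_perf example binary over /dev/fd/62, and walks the traced settings with read -r var val. As a minimal sketch (not the harness itself), the run should be reproducible standalone with the flags taken verbatim from the traced command line; dropping -c /dev/fd/62 and letting the app fall back to its default configuration is an assumption of this sketch:

  # Mirror the traced invocation: 'compare' workload for 1 second, verifying results (-y).
  # The binary path is the one visible in this job's workspace.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y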
00:05:04.703 [2024-07-15 12:57:26.269389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718277 ] 00:05:04.703 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.703 [2024-07-15 12:57:26.330664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.961 [2024-07-15 12:57:26.449342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.961 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.962 12:57:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 
12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:06.347 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.348 12:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:06.348 12:57:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.348 00:05:06.348 real 0m1.464s 00:05:06.348 user 0m1.328s 00:05:06.348 sys 0m0.138s 00:05:06.348 12:57:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.348 12:57:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:06.348 ************************************ 00:05:06.348 END TEST accel_compare 00:05:06.348 ************************************ 00:05:06.348 12:57:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.348 12:57:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:06.348 12:57:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:06.348 12:57:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.348 12:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.348 ************************************ 00:05:06.348 START TEST accel_xor 00:05:06.348 ************************************ 00:05:06.348 12:57:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:06.348 12:57:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:06.348 [2024-07-15 12:57:27.780344] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
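The run_test accel_xor accel_test -t 1 -w xor -y line above drives the same flow with the xor workload; the val=2 traced just below suggests two source buffers is the default when no -x option is passed — an inference from the trace, not something the log states. A minimal standalone sketch under the same assumptions as the compare example:

  # XOR two source buffers into a destination for 1 second, verifying output (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y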
00:05:06.348 [2024-07-15 12:57:27.780396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718545 ] 00:05:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.348 [2024-07-15 12:57:27.842333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.348 [2024-07-15 12:57:27.964487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.348 12:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.775 00:05:07.775 real 0m1.457s 00:05:07.775 user 0m1.324s 00:05:07.775 sys 0m0.135s 00:05:07.775 12:57:29 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.775 12:57:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:07.775 ************************************ 00:05:07.775 END TEST accel_xor 00:05:07.775 ************************************ 00:05:07.775 12:57:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.775 12:57:29 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:07.775 12:57:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:07.775 12:57:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.775 12:57:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.775 ************************************ 00:05:07.775 START TEST accel_xor 00:05:07.775 ************************************ 00:05:07.775 12:57:29 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:07.775 12:57:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:07.775 [2024-07-15 12:57:29.289473] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
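This second accel_xor unit re-runs the workload with -x 3, which is why START TEST accel_xor appears twice under the same name; the val=3 traced below matches the extra source buffer. A sketch under the same assumptions, with -x 3 taken verbatim from the run_test line:

  # Same xor run, widened to three source buffers via -x 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3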
00:05:07.775 [2024-07-15 12:57:29.289535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718714 ] 00:05:07.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.776 [2024-07-15 12:57:29.351977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.776 [2024-07-15 12:57:29.472979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.035 12:57:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:09.414 12:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.414 00:05:09.414 real 0m1.490s 00:05:09.414 user 0m1.341s 00:05:09.414 sys 0m0.151s 00:05:09.414 12:57:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.414 12:57:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:09.414 ************************************ 00:05:09.414 END TEST accel_xor 00:05:09.414 ************************************ 00:05:09.414 12:57:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.414 12:57:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:09.414 12:57:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:09.414 12:57:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.414 12:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.414 ************************************ 00:05:09.414 START TEST accel_dif_verify 00:05:09.414 ************************************ 00:05:09.414 12:57:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:09.414 12:57:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:09.414 [2024-07-15 12:57:30.832638] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
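accel_dif_verify moves on to the DIF workloads. Note there is no -y here, and the trace below records val=No: verification is the operation itself rather than an optional check on top. The '512 bytes' and '8 bytes' values in the trace are consistent with 512-byte DIF blocks each carrying 8 bytes of protection information, but that mapping is an inference, not stated by the log. Minimal sketch, flags taken verbatim from the traced run_test line:

  # DIF verify workload for 1 second over the harness's default buffers
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify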
00:05:09.414 [2024-07-15 12:57:30.832700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718867 ] 00:05:09.414 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.414 [2024-07-15 12:57:30.896067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.414 [2024-07-15 12:57:31.018212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.414 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.415 12:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.789 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:10.790 12:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.790 00:05:10.790 real 0m1.483s 00:05:10.790 user 0m1.347s 00:05:10.790 sys 0m0.139s 00:05:10.790 12:57:32 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.790 12:57:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 ************************************ 00:05:10.790 END TEST accel_dif_verify 00:05:10.790 ************************************ 00:05:10.790 12:57:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.790 12:57:32 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:10.790 12:57:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:10.790 12:57:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.790 12:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 ************************************ 00:05:10.790 START TEST accel_dif_generate 00:05:10.790 ************************************ 00:05:10.790 12:57:32 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.790 
12:57:32 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:10.790 12:57:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:10.790 [2024-07-15 12:57:32.361100] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:10.790 [2024-07-15 12:57:32.361165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719139 ] 00:05:10.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.790 [2024-07-15 12:57:32.427555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.049 [2024-07-15 12:57:32.547791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:11.049 12:57:32 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.049 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 12:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.427 12:57:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:12.427 12:57:33 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.427 00:05:12.427 real 0m1.482s 00:05:12.427 user 0m1.335s 00:05:12.427 sys 0m0.151s 00:05:12.427 12:57:33 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.427 12:57:33 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:12.427 ************************************ 00:05:12.427 END TEST accel_dif_generate 00:05:12.427 ************************************ 00:05:12.427 12:57:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.427 12:57:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:12.427 12:57:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:12.427 12:57:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.427 12:57:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.427 ************************************ 00:05:12.427 START TEST accel_dif_generate_copy 00:05:12.427 ************************************ 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:12.427 12:57:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:12.427 [2024-07-15 12:57:33.888146] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
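The repeated var/val trace lines above come from a short read-back loop in accel.sh (script lines 19-23 in the trace): the configuration that accel_perf reports is consumed as colon-separated var/val pairs and matched in a case statement to capture the opcode (accel_opc) and module (accel_module). A minimal bash sketch of that idiom follows; the loop shape (IFS=:, read -r var val, case "$var") is what the trace shows, while the matched key names and the binary path are illustrative assumptions, not the exact SPDK source:

#!/usr/bin/env bash
# Sketch of the accel.sh read-back loop seen in the trace above.
# The key strings matched below are hypothetical; only the loop shape
# (IFS=:, read -r var val, case "$var") is confirmed by the trace.
while IFS=: read -r var val; do
  case "$var" in
    *"workload"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. dif_generate_copy
    *"module"*)   accel_module=${val//[[:space:]]/} ;; # e.g. software
  esac
done < <(./build/examples/accel_perf -t 1 -w dif_generate_copy)
echo "opcode=$accel_opc module=$accel_module"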
00:05:12.427 [2024-07-15 12:57:33.888223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719301 ] 00:05:12.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.427 [2024-07-15 12:57:33.954284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.427 [2024-07-15 12:57:34.076551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
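The accel_dif_generate_copy case traced here reduces to a single one-second accel_perf run on the software module, pinned to one core per the EAL output above. A standalone reproduction, assuming a built SPDK tree and omitting the JSON config fd that the harness passes via -c /dev/fd/62:

# From the SPDK repository root, after a build:
./build/examples/accel_perf -t 1 -w dif_generate_copy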
val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.687 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.688 12:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.065 00:05:14.065 real 0m1.491s 00:05:14.065 user 0m1.349s 00:05:14.065 sys 0m0.144s 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.065 12:57:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:14.065 ************************************ 00:05:14.065 END TEST accel_dif_generate_copy 00:05:14.065 ************************************ 00:05:14.065 12:57:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.065 12:57:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:14.065 12:57:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.065 12:57:35 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:14.065 12:57:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.065 12:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.065 ************************************ 00:05:14.065 START TEST accel_comp 00:05:14.065 ************************************ 00:05:14.065 12:57:35 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.065 12:57:35 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.065 12:57:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:14.066 [2024-07-15 12:57:35.428401] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:14.066 [2024-07-15 12:57:35.428468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719459 ] 00:05:14.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.066 [2024-07-15 12:57:35.492028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.066 [2024-07-15 12:57:35.613912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.066 12:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.495 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.495 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.495 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.495 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:15.496 12:57:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.496 00:05:15.496 real 0m1.489s 00:05:15.496 user 0m1.341s 00:05:15.496 sys 0m0.151s 00:05:15.496 12:57:36 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.496 12:57:36 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:15.496 ************************************ 00:05:15.496 END TEST accel_comp 00:05:15.496 ************************************ 00:05:15.496 12:57:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.496 12:57:36 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.496 12:57:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:15.496 12:57:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.496 12:57:36 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.496 ************************************ 00:05:15.496 START TEST accel_decomp 00:05:15.496 ************************************ 00:05:15.496 12:57:36 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:15.496 12:57:36 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:15.496 [2024-07-15 12:57:36.961247] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
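The decompress cases that follow all drive the same compressed input through accel_perf: -w decompress selects the opcode, -l points at the input file (test/accel/bib in this workspace), and -y asks for the output to be verified. A standalone sketch under the same assumptions as above:

# Decompress test/accel/bib for one second and verify the result:
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y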
00:05:15.496 [2024-07-15 12:57:36.961315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719727 ] 00:05:15.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.496 [2024-07-15 12:57:37.023926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.496 [2024-07-15 12:57:37.146278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.756 12:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.134 12:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.134 00:05:17.134 real 0m1.487s 00:05:17.134 user 0m1.337s 00:05:17.134 sys 0m0.153s 00:05:17.134 12:57:38 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.134 12:57:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:17.134 ************************************ 00:05:17.134 END TEST accel_decomp 00:05:17.134 ************************************ 00:05:17.134 12:57:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.134 12:57:38 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:17.134 12:57:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:17.134 12:57:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.134 12:57:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.134 ************************************ 00:05:17.134 START TEST accel_decomp_full 00:05:17.134 ************************************ 00:05:17.134 12:57:38 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:17.134 12:57:38 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.134 12:57:38 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:17.135 [2024-07-15 12:57:38.490264] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:17.135 [2024-07-15 12:57:38.490330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719893 ] 00:05:17.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.135 [2024-07-15 12:57:38.552347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.135 [2024-07-15 12:57:38.675019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.135 12:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.514 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.515 12:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.515 00:05:18.515 real 0m1.507s 00:05:18.515 user 0m1.359s 00:05:18.515 sys 0m0.150s 00:05:18.515 12:57:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.515 12:57:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:18.515 ************************************ 00:05:18.515 END TEST accel_decomp_full 00:05:18.515 ************************************ 00:05:18.515 12:57:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.515 12:57:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.515 12:57:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:18.515 12:57:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.515 12:57:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.515 ************************************ 00:05:18.515 START TEST accel_decomp_mcore 00:05:18.515 ************************************ 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:18.515 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:18.515 [2024-07-15 12:57:40.041860] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
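The _mcore variant re-runs the decompress workload across four reactors: the EAL parameters above carry -c 0xf and the log shows reactors starting on cores 0 through 3, matching the -m 0xf coremask passed straight through to accel_perf. As a standalone sketch:

# Same decompress run, fanned out over four cores (coremask 0xf):
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf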
00:05:18.515 [2024-07-15 12:57:40.041956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720056 ] 00:05:18.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.515 [2024-07-15 12:57:40.106747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.774 [2024-07-15 12:57:40.232996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.774 [2024-07-15 12:57:40.233053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.774 [2024-07-15 12:57:40.233104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.774 [2024-07-15 12:57:40.233108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.774 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.774 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.775 12:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.775 12:57:40 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # values read by the option loop: val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes [repeated "case $var / IFS=: / read -r var val" xtrace trimmed]
00:05:20.154 12:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:20.154 12:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:20.154 12:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:20.154
00:05:20.154 real 0m1.503s
00:05:20.154 user 0m4.813s
00:05:20.154 sys 0m0.159s
00:05:20.154 12:57:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.154 12:57:41 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:20.154 ************************************
00:05:20.154 END TEST accel_decomp_mcore
00:05:20.154 ************************************
00:05:20.154 12:57:41 accel -- common/autotest_common.sh@1142 -- # return 0
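For reference, the accel_perf runs traced in this section can be reproduced by hand against the same tree. A minimal sketch only: it drops the -c /dev/fd/62 JSON config that build_accel_config pipes in (the software path needs no module config is an assumption here), and the remaining flags are copied verbatim from the traces below rather than explained:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second verified (-y) software decompress of the bib test file on cores 0-3 (-m 0xf)
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf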
00:05:20.154 12:57:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:20.154 ************************************
00:05:20.154 START TEST accel_decomp_full_mcore
00:05:20.154 ************************************
00:05:20.154 12:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf [build_accel_config xtrace trimmed]
00:05:20.155 [2024-07-15 12:57:41.594846] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:20.155 [2024-07-15 12:57:41.594941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720324 ]
00:05:20.155 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.155 [2024-07-15 12:57:41.662978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:20.155 [2024-07-15 12:57:41.788829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.155 [2024-07-15 12:57:41.788897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:20.155 [2024-07-15 12:57:41.788933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:20.155 [2024-07-15 12:57:41.788936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.414 12:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # values read by the option loop: val=0xf, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes [repeated case/IFS/read xtrace trimmed]
00:05:21.803 12:57:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:21.803 12:57:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:21.803 12:57:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:21.803
00:05:21.803 real 0m1.515s
00:05:21.803 user 0m4.868s
00:05:21.803 sys 0m0.154s
00:05:21.803 12:57:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.803 12:57:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:21.803 ************************************
00:05:21.803 END TEST accel_decomp_full_mcore
00:05:21.803 ************************************
00:05:21.803 12:57:43 accel -- common/autotest_common.sh@1142 -- # return 0
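The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs before every run here, and since every case still passes it is informational on this host. If it needed chasing, the standard Linux counters (not SPDK-specific) are the place to look:

  # overall hugepage accounting, then the 2 MB pool on the node EAL complains about
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages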
00:05:21.803 12:57:43 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:21.803 ************************************
00:05:21.803 START TEST accel_decomp_mthread
00:05:21.803 ************************************
00:05:21.803 12:57:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 [build_accel_config xtrace trimmed]
00:05:21.803 [2024-07-15 12:57:43.159175] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:21.803 [2024-07-15 12:57:43.159241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720487 ]
00:05:21.803 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.803 [2024-07-15 12:57:43.224763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.803 [2024-07-15 12:57:43.347638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.803 12:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # values read by the option loop: val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes [repeated case/IFS/read xtrace trimmed]
00:05:23.236 12:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:23.236 12:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:23.236 12:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.236
00:05:23.236 real 0m1.498s
00:05:23.236 user 0m1.345s
00:05:23.236 sys 0m0.155s
00:05:23.236 12:57:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.236 12:57:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:23.236 ************************************
00:05:23.236 END TEST accel_decomp_mthread
00:05:23.236 ************************************
00:05:23.236 12:57:44 accel -- common/autotest_common.sh@1142 -- # return 0
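When scanning a raw log like this one, the banners and timings are the only lines that matter for a quick pass/fail read; a one-liner such as the following pulls them out (log file name hypothetical, GNU grep assumed):

  grep -E 'START TEST|END TEST|real 0m' nvmf-tcp-phy-autotest.log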
00:05:23.236 12:57:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:23.236 ************************************
00:05:23.236 START TEST accel_decomp_full_mthread
00:05:23.236 ************************************
00:05:23.236 12:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 [build_accel_config xtrace trimmed]
00:05:23.236 [2024-07-15 12:57:44.707373] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:23.236 [2024-07-15 12:57:44.707426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720761 ]
00:05:23.236 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.236 [2024-07-15 12:57:44.768506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.236 [2024-07-15 12:57:44.891407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.496 12:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # values read by the option loop: val=0x1, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes [repeated case/IFS/read xtrace trimmed]
00:05:24.869 12:57:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:24.869 12:57:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:24.869 12:57:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:24.869
00:05:24.869 real 0m1.525s
00:05:24.869 user 0m1.377s
00:05:24.869 sys 0m0.150s
00:05:24.869 12:57:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.869 12:57:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:24.869 ************************************
00:05:24.869 END TEST accel_decomp_full_mthread
00:05:24.869 ************************************
00:05:24.869 12:57:46 accel -- common/autotest_common.sh@1142 -- # return 0
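Every case above is driven through the run_test helper in common/autotest_common.sh, which is what produces the START/END banners and the real/user/sys triples. A simplified sketch of that pattern (not the real helper, which also counts arguments and manages xtrace state):

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # the traced command, e.g. accel_test -t 1 -w decompress ...
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # hypothetical invocation: run_test_sketch demo_sleep sleep 1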
00:05:24.869 12:57:46 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:24.869 12:57:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 [build_accel_config xtrace trimmed]
00:05:24.869 ************************************
00:05:24.869 START TEST accel_dif_functional_tests
00:05:24.869 ************************************
00:05:24.869 [2024-07-15 12:57:46.302940] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:24.869 [2024-07-15 12:57:46.303003] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720921 ]
00:05:24.869 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.869 [2024-07-15 12:57:46.363484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:25.128 [2024-07-15 12:57:46.489490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:25.128 [2024-07-15 12:57:46.489544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:25.128 [2024-07-15 12:57:46.489548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.128 CUnit - A unit testing framework for C - Version 2.1-3
00:05:25.128 http://cunit.sourceforge.net/
00:05:25.128 Suite: accel_dif
00:05:25.128   Test: verify: DIF generated, GUARD check ...passed
00:05:25.128   Test: verify: DIF generated, APPTAG check ...passed
00:05:25.128   Test: verify: DIF generated, REFTAG check ...passed
00:05:25.128   Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:57:46.591194] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:25.128 passed
00:05:25.128   Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:57:46.591266] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:25.128 passed
00:05:25.128   Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:57:46.591305] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:25.128 passed
00:05:25.128   Test: verify: APPTAG correct, APPTAG check ...passed
00:05:25.128   Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:57:46.591380] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:25.128 passed
00:05:25.128   Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:25.128   Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:25.128   Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:25.128   Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:57:46.591543] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:25.128 passed
00:05:25.128   Test: verify copy: DIF generated, GUARD check ...passed
00:05:25.128   Test: verify copy: DIF generated, APPTAG check ...passed
00:05:25.128   Test: verify copy: DIF generated, REFTAG check ...passed
00:05:25.128   Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:57:46.591718] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:25.128 passed
00:05:25.128   Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:57:46.591761] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:25.128 passed
00:05:25.128   Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:57:46.591799] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:25.128 passed
00:05:25.128   Test: generate copy: DIF generated, GUARD check ...passed
00:05:25.128   Test: generate copy: DIF generated, APPTAG check ...passed
00:05:25.128   Test: generate copy: DIF generated, REFTAG check ...passed
00:05:25.128   Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:25.128   Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:25.128   Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:25.128   Test: generate copy: iovecs-len validate ...[2024-07-15 12:57:46.592070] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:25.128 passed
00:05:25.128   Test: generate copy: buffer alignment validate ...passed
00:05:25.128
00:05:25.128 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:25.128               suites      1      1    n/a      0        0
00:05:25.128                tests     26     26     26      0        0
00:05:25.128              asserts    115    115    115      0      n/a
00:05:25.128
00:05:25.128 Elapsed time =    0.003 seconds
00:05:25.386 real 0m0.587s
00:05:25.386 user 0m0.871s
00:05:25.386 sys 0m0.184s
00:05:25.386 ************************************
00:05:25.386 END TEST accel_dif_functional_tests
00:05:25.386 ************************************
00:05:25.386 12:57:46 accel -- common/autotest_common.sh@1142 -- # return 0
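The *ERROR* lines inside the accel_dif suite are expected: the "not generated" and "incorrect" cases feed deliberately corrupted Guard, App Tag, and Ref Tag values and assert that the verify path rejects them, which is why each error is immediately followed by "passed". The suite is a standalone CUnit binary and can be run outside run_test; a sketch, where the empty JSON config standing in for the harness-generated /dev/fd/62 payload is an assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  echo '{}' > /tmp/accel_dif.json    # stand-in for the fd-62 config; contents assumed
  sudo "$SPDK/test/accel/dif/dif" -c /tmp/accel_dif.json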
00:05:25.386
00:05:25.386 real 0m33.585s
00:05:25.386 user 0m36.980s
00:05:25.386 sys 0m4.660s
00:05:25.386 ************************************
00:05:25.386 END TEST accel
00:05:25.386 ************************************
00:05:25.387 12:57:46 -- common/autotest_common.sh@1142 -- # return 0
00:05:25.387 12:57:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:25.387 ************************************
00:05:25.387 START TEST accel_rpc
00:05:25.387 ************************************
00:05:25.387 * Looking for test storage...
00:05:25.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:25.387 12:57:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:25.387 12:57:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:05:25.387 12:57:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3721107
00:05:25.387 12:57:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3721107
00:05:25.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.387 [2024-07-15 12:57:47.018182] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:05:25.387 [2024-07-15 12:57:47.018283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721107 ]
00:05:25.387 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.387 [2024-07-15 12:57:47.079789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.643 [2024-07-15 12:57:47.201163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.643 12:57:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] [feature-gate xtrace trimmed]
00:05:25.643 12:57:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:05:25.643 ************************************
00:05:25.643 START TEST accel_assign_opcode
00:05:25.643 ************************************
00:05:25.643 12:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:05:25.643 [2024-07-15 12:57:47.289887] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:05:25.643 12:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:05:25.643 [2024-07-15 12:57:47.297895] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:05:25.901 12:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:05:25.901 12:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments | jq -r .copy | grep software
00:05:25.901 software
00:05:25.901 real 0m0.302s
00:05:25.901 user 0m0.045s
00:05:25.901 sys 0m0.005s
00:05:25.901 ************************************
00:05:25.901 END TEST accel_assign_opcode
00:05:25.901 ************************************
00:05:26.159 12:57:47 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:26.159 12:57:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3721107 [uname/ps xtrace trimmed]
00:05:26.159 killing process with pid 3721107
00:05:26.417 real 0m1.184s
00:05:26.417 user 0m1.152s
00:05:26.417 sys 0m0.435s
00:05:26.417 ************************************
00:05:26.417 END TEST accel_rpc
00:05:26.417 ************************************
00:05:26.675 12:57:48 -- common/autotest_common.sh@1142 -- # return 0
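The accel_assign_opcode flow above is plain JSON-RPC and can be replayed by hand against any target started with --wait-for-rpc. All method names are taken straight from the trace; the default /var/tmp/spdk.sock socket is assumed:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # route the copy opcode to the software module
  "$SPDK/scripts/rpc.py" framework_start_init                    # finish subsystem initialization
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints: software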
00:05:26.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:26.675 12:57:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:26.675 12:57:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3721315 00:05:26.675 12:57:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:26.675 12:57:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3721315 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3721315 ']' 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.675 12:57:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.675 [2024-07-15 12:57:48.261414] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:26.675 [2024-07-15 12:57:48.261493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721315 ] 00:05:26.675 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.675 [2024-07-15 12:57:48.317534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.935 [2024-07-15 12:57:48.423751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.196 12:57:48 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.196 12:57:48 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:27.196 12:57:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:27.454 { 00:05:27.454 "version": "SPDK v24.09-pre git sha1 417133c03", 00:05:27.454 "fields": { 00:05:27.454 "major": 24, 00:05:27.454 "minor": 9, 00:05:27.454 "patch": 0, 00:05:27.454 "suffix": "-pre", 00:05:27.454 "commit": "417133c03" 00:05:27.454 } 00:05:27.454 } 00:05:27.454 12:57:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:27.454 12:57:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:27.454 12:57:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:27.455 12:57:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:27.455 12:57:48 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.724 request: 00:05:27.724 { 00:05:27.724 "method": "env_dpdk_get_mem_stats", 00:05:27.724 "req_id": 1 00:05:27.724 } 00:05:27.724 Got JSON-RPC error response 00:05:27.724 response: 00:05:27.724 { 00:05:27.724 "code": -32601, 00:05:27.724 "message": "Method not found" 00:05:27.724 } 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.724 12:57:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3721315 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3721315 ']' 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3721315 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721315 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.724 12:57:49 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721315' 00:05:27.724 killing process with pid 3721315 00:05:27.725 12:57:49 app_cmdline -- common/autotest_common.sh@967 -- # kill 3721315 00:05:27.725 12:57:49 app_cmdline -- common/autotest_common.sh@972 -- # wait 3721315 00:05:28.291 00:05:28.291 real 0m1.575s 00:05:28.291 user 0m1.893s 00:05:28.291 sys 0m0.449s 00:05:28.291 12:57:49 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
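[editor's note] What this trace verifies: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and every other method fails with JSON-RPC error -32601 (Method not found), which makes rpc.py exit nonzero; the NOT wrapper and the es=1 check assert that failure. Reproduced by hand, with paths relative to an SPDK checkout, it would look roughly like:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  ./scripts/rpc.py spdk_get_version    # allowed: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods     # allowed: lists only the two whitelisted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats \
      && echo "unexpected success" \
      || echo "filtered as expected"   # server replies {"code": -32601, "message": "Method not found"}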
00:05:28.291 12:57:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 END TEST app_cmdline 00:05:28.291 ************************************ 00:05:28.291 12:57:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.291 12:57:49 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:28.291 12:57:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.291 12:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.291 12:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 START TEST version 00:05:28.291 ************************************ 00:05:28.291 12:57:49 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:28.291 * Looking for test storage... 00:05:28.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.291 12:57:49 version -- app/version.sh@17 -- # get_header_version major 00:05:28.291 12:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # cut -f2 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.291 12:57:49 version -- app/version.sh@17 -- # major=24 00:05:28.291 12:57:49 version -- app/version.sh@18 -- # get_header_version minor 00:05:28.291 12:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # cut -f2 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.291 12:57:49 version -- app/version.sh@18 -- # minor=9 00:05:28.291 12:57:49 version -- app/version.sh@19 -- # get_header_version patch 00:05:28.291 12:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # cut -f2 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.291 12:57:49 version -- app/version.sh@19 -- # patch=0 00:05:28.291 12:57:49 version -- app/version.sh@20 -- # get_header_version suffix 00:05:28.291 12:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # cut -f2 00:05:28.291 12:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.291 12:57:49 version -- app/version.sh@20 -- # suffix=-pre 00:05:28.291 12:57:49 version -- app/version.sh@22 -- # version=24.9 00:05:28.291 12:57:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:28.291 12:57:49 version -- app/version.sh@28 -- # version=24.9rc0 00:05:28.291 12:57:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:28.291 12:57:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:28.291 12:57:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:28.291 12:57:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:28.291 00:05:28.291 real 0m0.108s 00:05:28.291 user 0m0.056s 00:05:28.291 sys 0m0.073s 00:05:28.291 12:57:49 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.291 12:57:49 version -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 END TEST version 00:05:28.291 ************************************ 00:05:28.291 12:57:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.291 12:57:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@198 -- # uname -s 00:05:28.291 12:57:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:28.291 12:57:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:28.291 12:57:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:28.291 12:57:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:28.291 12:57:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.291 12:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 12:57:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:28.291 12:57:49 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:28.291 12:57:49 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.291 12:57:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:28.291 12:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.291 12:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 START TEST nvmf_tcp 00:05:28.291 ************************************ 00:05:28.291 12:57:49 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:28.554 * Looking for test storage... 00:05:28.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.554 12:57:50 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.554 12:57:50 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.554 12:57:50 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.554 12:57:50 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.554 12:57:50 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.554 12:57:50 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.554 12:57:50 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.554 12:57:50 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:28.555 12:57:50 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:28.555 12:57:50 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.555 12:57:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:28.555 12:57:50 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:28.555 12:57:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:28.555 12:57:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.555 12:57:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.555 ************************************ 00:05:28.555 START TEST nvmf_example 00:05:28.555 ************************************ 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:28.555 * Looking for test storage... 
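[editor's note] For reference, the nvmf/common.sh sourcing traced above pins the well-known NVMe-oF TCP ports (4420, plus 4421/4422 for multi-listener tests) and mints a fresh host identity per run: nvme gen-hostnqn produces a uuid-based NQN, and the host ID is the uuid portion of it. A rough sketch of that derivation; the exact extraction used inside common.sh may differ:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the uuid after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")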
00:05:28.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:28.555 12:57:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:30.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:30.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:30.461 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:30.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:30.461 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:30.462 12:57:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:30.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:30.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:05:30.462 00:05:30.462 --- 10.0.0.2 ping statistics --- 00:05:30.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.462 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:30.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:30.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:05:30.462 00:05:30.462 --- 10.0.0.1 ping statistics --- 00:05:30.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.462 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3723219 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3723219 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3723219 ']' 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
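[editor's note] The nvmf_tcp_init trace above is the heart of the phy setup: the two e810 ports (presumably cabled to each other or to a common switch) are split across network namespaces so one host can act as both target and initiator over real hardware. cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2; cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the iptables rule admits port 4420, and the two pings prove reachability in both directions. Condensed from the commands actually traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns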
00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.462 12:57:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:31.657 12:57:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:31.657 EAL: No free 2048 kB hugepages reported on node 1 
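[editor's note] Stripped of the xtrace plumbing, the target bring-up above is five RPCs followed by the initiator run; roughly, with rpc.py standing in for scripts/rpc.py against the target's /var/tmp/spdk.sock:

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # -u: in-capsule data size in bytes
  rpc.py bdev_malloc_create 64 512                     # 64 MB RAM bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

-q is queue depth, -o the I/O size in bytes, -M 30 a 30% read mix, -t the run time in seconds; the results below report about 14.1 kIOPS (55 MiB/s) at a 4.53 ms mean latency for this 10-second run.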
00:05:43.868 Initializing NVMe Controllers
00:05:43.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:43.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:43.868 Initialization complete. Launching workers.
00:05:43.868 ========================================================
00:05:43.868 Latency(us)
00:05:43.868 Device Information : IOPS MiB/s Average min max
00:05:43.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14115.25 55.14 4533.55 763.54 15888.35
00:05:43.868 ========================================================
00:05:43.868 Total : 14115.25 55.14 4533.55 763.54 15888.35
00:05:43.868
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:05:43.868 rmmod nvme_tcp
00:05:43.868 rmmod nvme_fabrics
00:05:43.868 rmmod nvme_keyring
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3723219 ']'
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3723219
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3723219 ']'
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3723219
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723219
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723219'
00:05:43.868 killing process with pid 3723219
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3723219
00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3723219
00:05:43.868 nvmf threads initialize successfully
00:05:43.868 bdev subsystem init successfully
00:05:43.868 created a nvmf target service
00:05:43.868 create targets's poll groups done
00:05:43.868 all subsystems of target started
00:05:43.868 nvmf target is running
00:05:43.868 all subsystems of target stopped
00:05:43.868 destroy targets's poll groups done
00:05:43.868 destroyed the nvmf target service
00:05:43.868 bdev subsystem finish successfully
00:05:43.868 nvmf threads destroy successfully
00:05:43.868 12:58:03
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.868 12:58:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.124 00:05:44.124 real 0m15.730s 00:05:44.124 user 0m40.787s 00:05:44.124 sys 0m4.744s 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.124 12:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.124 ************************************ 00:05:44.124 END TEST nvmf_example 00:05:44.124 ************************************ 00:05:44.124 12:58:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:44.124 12:58:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:44.124 12:58:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:44.124 12:58:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.124 12:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.383 ************************************ 00:05:44.383 START TEST nvmf_filesystem 00:05:44.383 ************************************ 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:44.383 * Looking for test storage... 
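[editor's note] The nvmf_example teardown just traced mirrors the setup in reverse: sync, unload the kernel NVMe-oF modules with up to 20 retries under set +e, kill the example app, then undo the network configuration. The _remove_spdk_ns body runs with its xtrace disabled above, so the namespace deletion in this sketch is an assumption about what that helper does:

  sync                                  # flush the page cache before yanking modules
  for i in {1..20}; do
      modprobe -r nvme-tcp && break     # also drops nvme_fabrics / nvme_keyring once idle
      sleep 1                           # retry pacing is assumed; the trace only shows the loop
  done
  modprobe -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf example app (pid 3723219 above)
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # leave the initiator port unconfigured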
00:05:44.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:44.383 12:58:05 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:44.383 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:44.384 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:44.384 #define SPDK_CONFIG_H 00:05:44.384 #define SPDK_CONFIG_APPS 1 00:05:44.384 #define SPDK_CONFIG_ARCH native 00:05:44.384 #undef SPDK_CONFIG_ASAN 00:05:44.384 #undef SPDK_CONFIG_AVAHI 00:05:44.384 #undef SPDK_CONFIG_CET 00:05:44.384 #define SPDK_CONFIG_COVERAGE 1 00:05:44.384 #define SPDK_CONFIG_CROSS_PREFIX 00:05:44.384 #undef SPDK_CONFIG_CRYPTO 00:05:44.384 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:44.384 #undef SPDK_CONFIG_CUSTOMOCF 00:05:44.385 #undef SPDK_CONFIG_DAOS 00:05:44.385 #define SPDK_CONFIG_DAOS_DIR 00:05:44.385 #define SPDK_CONFIG_DEBUG 1 00:05:44.385 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:44.385 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:44.385 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:44.385 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:44.385 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:44.385 #undef SPDK_CONFIG_DPDK_UADK 00:05:44.385 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:44.385 #define SPDK_CONFIG_EXAMPLES 1 00:05:44.385 #undef SPDK_CONFIG_FC 00:05:44.385 #define SPDK_CONFIG_FC_PATH 00:05:44.385 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:44.385 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:44.385 #undef SPDK_CONFIG_FUSE 00:05:44.385 #undef SPDK_CONFIG_FUZZER 00:05:44.385 #define SPDK_CONFIG_FUZZER_LIB 00:05:44.385 #undef SPDK_CONFIG_GOLANG 00:05:44.385 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:44.385 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:44.385 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:44.385 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:44.385 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:44.385 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:44.385 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:44.385 #define SPDK_CONFIG_IDXD 1 00:05:44.385 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:44.385 #undef SPDK_CONFIG_IPSEC_MB 00:05:44.385 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:44.385 #define SPDK_CONFIG_ISAL 1 00:05:44.385 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:44.385 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:44.385 #define SPDK_CONFIG_LIBDIR 00:05:44.385 #undef SPDK_CONFIG_LTO 00:05:44.385 #define SPDK_CONFIG_MAX_LCORES 128 00:05:44.385 #define SPDK_CONFIG_NVME_CUSE 1 00:05:44.385 #undef SPDK_CONFIG_OCF 00:05:44.385 #define SPDK_CONFIG_OCF_PATH 00:05:44.385 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:44.385 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:44.385 #define SPDK_CONFIG_PGO_DIR 00:05:44.385 #undef SPDK_CONFIG_PGO_USE 00:05:44.385 #define SPDK_CONFIG_PREFIX /usr/local 00:05:44.385 #undef SPDK_CONFIG_RAID5F 00:05:44.385 #undef SPDK_CONFIG_RBD 00:05:44.385 #define SPDK_CONFIG_RDMA 1 00:05:44.385 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:44.385 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:44.385 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:44.385 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:44.385 #define SPDK_CONFIG_SHARED 1 00:05:44.385 #undef SPDK_CONFIG_SMA 00:05:44.385 #define SPDK_CONFIG_TESTS 1 00:05:44.385 #undef SPDK_CONFIG_TSAN 00:05:44.385 #define SPDK_CONFIG_UBLK 1 00:05:44.385 #define SPDK_CONFIG_UBSAN 1 00:05:44.385 #undef SPDK_CONFIG_UNIT_TESTS 00:05:44.385 #undef SPDK_CONFIG_URING 00:05:44.385 #define SPDK_CONFIG_URING_PATH 00:05:44.385 #undef SPDK_CONFIG_URING_ZNS 00:05:44.385 #undef SPDK_CONFIG_USDT 00:05:44.385 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:44.385 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:44.385 #define SPDK_CONFIG_VFIO_USER 1 00:05:44.385 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:44.385 #define SPDK_CONFIG_VHOST 1 00:05:44.385 #define SPDK_CONFIG_VIRTIO 1 00:05:44.385 #undef SPDK_CONFIG_VTUNE 00:05:44.385 #define SPDK_CONFIG_VTUNE_DIR 00:05:44.385 #define SPDK_CONFIG_WERROR 1 00:05:44.385 #define SPDK_CONFIG_WPDK_DIR 00:05:44.385 #undef SPDK_CONFIG_XNVME 00:05:44.385 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.385 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:44.386 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:44.387 12:58:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
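The trace above is autotest_common.sh wiring up the sanitizer environment: it rebuilds /var/tmp/asan_suppression_file, drops in a known-leak pattern for libfuse3, and exports LSAN_OPTIONS so every child process inherits the suppression. Reduced to a standalone sketch (paths and patterns copied from this trace; the UBSAN settings appear a few lines earlier in the same script, and the real script assembles the file from several sources rather than a single echo):

  # regenerate the LeakSanitizer suppression file and activate it for all children
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" > "$asan_suppression_file"   # known fuse3 leak, ignore it
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"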
00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3724926 ]] 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3724926 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:44.387 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.b4Upmu 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.b4Upmu/tests/target /tmp/spdk.b4Upmu 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55499304960 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6495387648 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996393984 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:44.388 12:58:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=954368 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:44.388 * Looking for test storage... 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55499304960 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8709980160 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:44.388 12:58:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
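The storage probe traced above (set_test_storage) decides where this test's scratch data lives: it generates a /tmp fallback with mktemp -udt, then walks the candidate directories and keeps the first one whose filesystem can absorb the request (2 GiB plus 64 MiB of slack, the 2214592512 seen above), exporting the winner as SPDK_TEST_STORAGE. A condensed sketch of that loop, with the df parsing simplified to GNU df flags (the real script reads one df -T snapshot into arrays instead of calling df per directory):

  # pick the first candidate directory with enough free space for the test
  requested_size=$((2147483648 + 64 * 1024 * 1024))
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mkdir -p "$target_dir"
      avail=$(df -B1 --output=avail "$target_dir" | tail -1)   # bytes free on the backing mount
      if (( avail >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
          break
      fi
  done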
00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.388 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.389 12:58:05 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:44.389 12:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.389 12:58:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:44.389 12:58:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:44.389 12:58:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:44.389 12:58:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:46.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:46.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.349 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.350 12:58:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.350 12:58:08 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:46.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:46.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.350 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:46.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:05:46.609 00:05:46.609 --- 10.0.0.2 ping statistics --- 00:05:46.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.609 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:46.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:05:46.609 00:05:46.609 --- 10.0.0.1 ping statistics --- 00:05:46.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.609 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:46.609 ************************************ 00:05:46.609 START TEST nvmf_filesystem_no_in_capsule 00:05:46.609 ************************************ 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3726553 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3726553 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3726553 ']' 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.609 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.610 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.610 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:46.610 [2024-07-15 12:58:08.239940] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:46.610 [2024-07-15 12:58:08.240014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:46.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.869 [2024-07-15 12:58:08.313377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.869 [2024-07-15 12:58:08.442240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:46.869 [2024-07-15 12:58:08.442299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:46.869 [2024-07-15 12:58:08.442316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.869 [2024-07-15 12:58:08.442329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.869 [2024-07-15 12:58:08.442341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
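For readers following the setup rather than the harness internals, the nvmf_tcp_init sequence traced above condenses to the sketch below. The interface names (cvl_0_0 / cvl_0_1), namespace name, 10.0.0.0/24 addresses, and nvmf_tgt flags are all taken verbatim from the trace; this is an orientation aid, not a substitute for nvmf/common.sh.

  # Target NIC moves into its own network namespace; initiator NIC stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # verify reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The nvmf target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF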
00:05:46.869 [2024-07-15 12:58:08.442427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.869 [2024-07-15 12:58:08.442481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.869 [2024-07-15 12:58:08.442505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.869 [2024-07-15 12:58:08.442509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.869 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.869 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:46.869 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:46.869 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.869 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 [2024-07-15 12:58:08.596604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 [2024-07-15 12:58:08.788530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.127 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:47.127 { 00:05:47.127 "name": "Malloc1", 00:05:47.127 "aliases": [ 00:05:47.127 "b3c76fed-c1c9-4974-9a74-5bc79999cc6e" 00:05:47.127 ], 00:05:47.127 "product_name": "Malloc disk", 00:05:47.127 "block_size": 512, 00:05:47.127 "num_blocks": 1048576, 00:05:47.127 "uuid": "b3c76fed-c1c9-4974-9a74-5bc79999cc6e", 00:05:47.127 "assigned_rate_limits": { 00:05:47.127 "rw_ios_per_sec": 0, 00:05:47.127 "rw_mbytes_per_sec": 0, 00:05:47.127 "r_mbytes_per_sec": 0, 00:05:47.127 "w_mbytes_per_sec": 0 00:05:47.127 }, 00:05:47.127 "claimed": true, 00:05:47.127 "claim_type": "exclusive_write", 00:05:47.127 "zoned": false, 00:05:47.127 "supported_io_types": { 00:05:47.127 "read": true, 00:05:47.127 "write": true, 00:05:47.127 "unmap": true, 00:05:47.127 "flush": true, 00:05:47.127 "reset": true, 00:05:47.127 "nvme_admin": false, 00:05:47.127 "nvme_io": false, 00:05:47.127 "nvme_io_md": false, 00:05:47.127 "write_zeroes": true, 00:05:47.127 "zcopy": true, 00:05:47.127 "get_zone_info": false, 00:05:47.127 "zone_management": false, 00:05:47.127 "zone_append": false, 00:05:47.127 "compare": false, 00:05:47.127 "compare_and_write": false, 00:05:47.127 "abort": true, 00:05:47.127 "seek_hole": false, 00:05:47.127 "seek_data": false, 00:05:47.127 "copy": true, 00:05:47.127 "nvme_iov_md": false 00:05:47.127 }, 00:05:47.127 "memory_domains": [ 00:05:47.127 { 
00:05:47.127 "dma_device_id": "system", 00:05:47.127 "dma_device_type": 1 00:05:47.127 }, 00:05:47.127 { 00:05:47.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.127 "dma_device_type": 2 00:05:47.127 } 00:05:47.127 ], 00:05:47.127 "driver_specific": {} 00:05:47.127 } 00:05:47.127 ]' 00:05:47.128 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:47.385 12:58:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:47.950 12:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:47.951 12:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:47.951 12:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:47.951 12:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:47.951 12:58:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:50.482 12:58:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:50.741 12:58:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.689 ************************************ 00:05:51.689 START TEST filesystem_ext4 00:05:51.689 ************************************ 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:05:51.689 12:58:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:05:51.689 12:58:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:51.689 mke2fs 1.46.5 (30-Dec-2021) 00:05:51.947 Discarding device blocks: 0/522240 done 00:05:51.947 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:51.947 Filesystem UUID: 15ff2dc4-0905-414b-9501-5c0d70b9da2e 00:05:51.947 Superblock backups stored on blocks: 00:05:51.947 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:51.947 00:05:51.947 Allocating group tables: 0/64 done 00:05:51.947 Writing inode tables: 0/64 done 00:05:55.233 Creating journal (8192 blocks): done 00:05:55.233 Writing superblocks and filesystem accounting information: 0/64 done 00:05:55.233 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3726553 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:55.233 00:05:55.233 real 0m3.253s 00:05:55.233 user 0m0.017s 00:05:55.233 sys 0m0.058s 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:55.233 ************************************ 00:05:55.233 END TEST filesystem_ext4 00:05:55.233 ************************************ 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:55.233 12:58:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.233 ************************************ 00:05:55.233 START TEST filesystem_btrfs 00:05:55.233 ************************************ 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:05:55.233 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:55.493 btrfs-progs v6.6.2 00:05:55.493 See https://btrfs.readthedocs.io for more information. 00:05:55.493 00:05:55.493 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:55.493 NOTE: several default settings have changed in version 5.15, please make sure 00:05:55.493 this does not affect your deployments: 00:05:55.493 - DUP for metadata (-m dup) 00:05:55.493 - enabled no-holes (-O no-holes) 00:05:55.493 - enabled free-space-tree (-R free-space-tree) 00:05:55.493 00:05:55.493 Label: (null) 00:05:55.493 UUID: e86e2053-3907-4961-9801-a7d4f62abefe 00:05:55.493 Node size: 16384 00:05:55.493 Sector size: 4096 00:05:55.493 Filesystem size: 510.00MiB 00:05:55.493 Block group profiles: 00:05:55.493 Data: single 8.00MiB 00:05:55.493 Metadata: DUP 32.00MiB 00:05:55.493 System: DUP 8.00MiB 00:05:55.493 SSD detected: yes 00:05:55.493 Zoned device: no 00:05:55.493 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:55.493 Runtime features: free-space-tree 00:05:55.493 Checksum: crc32c 00:05:55.493 Number of devices: 1 00:05:55.493 Devices: 00:05:55.493 ID SIZE PATH 00:05:55.493 1 510.00MiB /dev/nvme0n1p1 00:05:55.493 00:05:55.493 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:05:55.493 12:58:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3726553 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:56.428 00:05:56.428 real 0m1.254s 00:05:56.428 user 0m0.015s 00:05:56.428 sys 0m0.118s 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:56.428 ************************************ 00:05:56.428 END TEST filesystem_btrfs 00:05:56.428 ************************************ 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.428 ************************************ 00:05:56.428 START TEST filesystem_xfs 00:05:56.428 ************************************ 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:05:56.428 12:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:56.428 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:56.428 = sectsz=512 attr=2, projid32bit=1 00:05:56.428 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:56.428 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:56.428 data = bsize=4096 blocks=130560, imaxpct=25 00:05:56.428 = sunit=0 swidth=0 blks 00:05:56.428 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:56.428 log =internal log bsize=4096 blocks=16384, version=2 00:05:56.428 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:56.428 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:57.367 Discarding blocks...Done. 
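Each filesystem_* subtest above and below applies the same create/use/teardown check from target/filesystem.sh; only the mkfs invocation varies. The sketch below is condensed from the trace, with $nvmfpid standing in for the literal target PID (3726553 in this run) and the umount retry loop omitted:

  mkfs.xfs -f /dev/nvme0n1p1               # ext4/btrfs variants use mkfs.ext4 -F / mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                    # prove the FS takes writes over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target process must still be alive afterwards
  lsblk -l -o NAME | grep -q -w nvme0n1    # device and partition still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1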
00:05:57.367 12:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:05:57.367 12:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:59.276 00:05:59.276 real 0m2.702s 00:05:59.276 user 0m0.018s 00:05:59.276 sys 0m0.063s 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:59.276 ************************************ 00:05:59.276 END TEST filesystem_xfs 00:05:59.276 ************************************ 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:59.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.276 12:58:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3726553 ']' 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3726553' 00:05:59.276 killing process with pid 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3726553 00:05:59.276 12:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3726553 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:05:59.842 00:05:59.842 real 0m13.151s 00:05:59.842 user 0m50.322s 00:05:59.842 sys 0m1.861s 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:59.842 ************************************ 00:05:59.842 END TEST nvmf_filesystem_no_in_capsule 00:05:59.842 ************************************ 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.842 ************************************ 00:05:59.842 START TEST nvmf_filesystem_in_capsule 00:05:59.842 ************************************ 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:59.842 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3728372 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3728372 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3728372 ']' 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.843 12:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:59.843 [2024-07-15 12:58:21.447783] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:05:59.843 [2024-07-15 12:58:21.447897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.843 [2024-07-15 12:58:21.516779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.102 [2024-07-15 12:58:21.636111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.102 [2024-07-15 12:58:21.636175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:00.102 [2024-07-15 12:58:21.636201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.102 [2024-07-15 12:58:21.636213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.102 [2024-07-15 12:58:21.636233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.102 [2024-07-15 12:58:21.636336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.102 [2024-07-15 12:58:21.636398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.102 [2024-07-15 12:58:21.636453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.102 [2024-07-15 12:58:21.636456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 [2024-07-15 12:58:22.420040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.038 12:58:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 [2024-07-15 12:58:22.604297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:01.039 { 00:06:01.039 "name": "Malloc1", 00:06:01.039 "aliases": [ 00:06:01.039 "f1d33c78-87fd-4402-bb17-cd3c0282d615" 00:06:01.039 ], 00:06:01.039 "product_name": "Malloc disk", 00:06:01.039 "block_size": 512, 00:06:01.039 "num_blocks": 1048576, 00:06:01.039 "uuid": "f1d33c78-87fd-4402-bb17-cd3c0282d615", 00:06:01.039 "assigned_rate_limits": { 00:06:01.039 "rw_ios_per_sec": 0, 00:06:01.039 "rw_mbytes_per_sec": 0, 00:06:01.039 "r_mbytes_per_sec": 0, 00:06:01.039 "w_mbytes_per_sec": 0 00:06:01.039 }, 00:06:01.039 "claimed": true, 00:06:01.039 "claim_type": "exclusive_write", 00:06:01.039 "zoned": false, 00:06:01.039 "supported_io_types": { 00:06:01.039 "read": true, 00:06:01.039 "write": true, 00:06:01.039 "unmap": true, 00:06:01.039 "flush": true, 00:06:01.039 "reset": true, 00:06:01.039 "nvme_admin": false, 00:06:01.039 "nvme_io": false, 00:06:01.039 "nvme_io_md": false, 00:06:01.039 "write_zeroes": true, 00:06:01.039 "zcopy": true, 00:06:01.039 "get_zone_info": false, 00:06:01.039 "zone_management": false, 00:06:01.039 
"zone_append": false, 00:06:01.039 "compare": false, 00:06:01.039 "compare_and_write": false, 00:06:01.039 "abort": true, 00:06:01.039 "seek_hole": false, 00:06:01.039 "seek_data": false, 00:06:01.039 "copy": true, 00:06:01.039 "nvme_iov_md": false 00:06:01.039 }, 00:06:01.039 "memory_domains": [ 00:06:01.039 { 00:06:01.039 "dma_device_id": "system", 00:06:01.039 "dma_device_type": 1 00:06:01.039 }, 00:06:01.039 { 00:06:01.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.039 "dma_device_type": 2 00:06:01.039 } 00:06:01.039 ], 00:06:01.039 "driver_specific": {} 00:06:01.039 } 00:06:01.039 ]' 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:01.039 12:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:01.976 12:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:01.976 12:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:01.976 12:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:01.976 12:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:01.976 12:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:03.944 12:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:04.882 12:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:06.262 ************************************ 00:06:06.262 START TEST filesystem_in_capsule_ext4 00:06:06.262 ************************************ 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:06.262 12:58:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:06.262 12:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:06.262 mke2fs 1.46.5 (30-Dec-2021) 00:06:06.262 Discarding device blocks: 0/522240 done 00:06:06.262 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:06.262 Filesystem UUID: 57cdd557-371e-465a-aa88-99d8f31d5b8e 00:06:06.262 Superblock backups stored on blocks: 00:06:06.262 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:06.262 00:06:06.262 Allocating group tables: 0/64 done 00:06:06.262 Writing inode tables: 0/64 done 00:06:06.262 Creating journal (8192 blocks): done 00:06:07.350 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:06:07.350 00:06:07.350 12:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:07.350 12:58:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:07.350 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:07.350 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:07.350 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:07.350 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3728372 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:07.609 00:06:07.609 real 0m1.501s 00:06:07.609 user 0m0.011s 00:06:07.609 sys 0m0.062s 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:07.609 ************************************ 00:06:07.609 END TEST filesystem_in_capsule_ext4 00:06:07.609 ************************************ 
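Worth noting at this point: the nvmf_filesystem_in_capsule variant running here differs from the earlier nvmf_filesystem_no_in_capsule run in exactly one transport knob, visible in the two nvmf_create_transport calls in the trace; the bdev, subsystem, listener, and per-filesystem checks are otherwise identical. The reading of -c as the in-capsule data size is inferred from the in_capsule test parameter feeding it:

  # first run (no in-capsule data):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this run (up to 4096 bytes of data carried inside the command capsule):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096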
00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.609 ************************************ 00:06:07.609 START TEST filesystem_in_capsule_btrfs 00:06:07.609 ************************************ 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:07.609 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:08.179 btrfs-progs v6.6.2 00:06:08.179 See https://btrfs.readthedocs.io for more information. 00:06:08.179 00:06:08.179 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:08.179 NOTE: several default settings have changed in version 5.15, please make sure 00:06:08.179 this does not affect your deployments: 00:06:08.179 - DUP for metadata (-m dup) 00:06:08.179 - enabled no-holes (-O no-holes) 00:06:08.179 - enabled free-space-tree (-R free-space-tree) 00:06:08.179 00:06:08.179 Label: (null) 00:06:08.179 UUID: 96be0b89-246e-4809-92a2-64e056234068 00:06:08.179 Node size: 16384 00:06:08.179 Sector size: 4096 00:06:08.179 Filesystem size: 510.00MiB 00:06:08.179 Block group profiles: 00:06:08.179 Data: single 8.00MiB 00:06:08.179 Metadata: DUP 32.00MiB 00:06:08.179 System: DUP 8.00MiB 00:06:08.179 SSD detected: yes 00:06:08.179 Zoned device: no 00:06:08.179 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:08.179 Runtime features: free-space-tree 00:06:08.179 Checksum: crc32c 00:06:08.179 Number of devices: 1 00:06:08.179 Devices: 00:06:08.179 ID SIZE PATH 00:06:08.179 1 510.00MiB /dev/nvme0n1p1 00:06:08.179 00:06:08.179 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:08.179 12:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3728372 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.748 00:06:08.748 real 0m1.193s 00:06:08.748 user 0m0.017s 00:06:08.748 sys 0m0.113s 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:08.748 ************************************ 00:06:08.748 END TEST filesystem_in_capsule_btrfs 00:06:08.748 ************************************ 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.748 ************************************ 00:06:08.748 START TEST filesystem_in_capsule_xfs 00:06:08.748 ************************************ 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:08.748 12:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:09.008 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:09.008 = sectsz=512 attr=2, projid32bit=1 00:06:09.008 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:09.008 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:09.008 data = bsize=4096 blocks=130560, imaxpct=25 00:06:09.008 = sunit=0 swidth=0 blks 00:06:09.008 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:09.008 log =internal log bsize=4096 blocks=16384, version=2 00:06:09.008 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:09.008 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:09.577 Discarding blocks...Done. 
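All three filesystem runs funnel through the same make_filesystem helper, and the only branch visible in the traces is the force flag: mke2fs wants -F where mkfs.btrfs and mkfs.xfs take -f (the sh@929/sh@930/sh@932 lines in each run). A sketch of the helper as the traces imply it; the retry counter i declared at sh@926 is elided:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F    # sh@930: mke2fs spells its force flag with a capital F
    else
        force=-f    # sh@932: btrfs and xfs use lowercase
    fi
    mkfs.$fstype $force "$dev_name"   # sh@935
    return 0                          # sh@943 on success
}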
00:06:09.577 12:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:09.577 12:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3728372 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:12.117 00:06:12.117 real 0m3.308s 00:06:12.117 user 0m0.012s 00:06:12.117 sys 0m0.065s 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:12.117 ************************************ 00:06:12.117 END TEST filesystem_in_capsule_xfs 00:06:12.117 ************************************ 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:12.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:12.117 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:12.117 12:58:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3728372 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3728372 ']' 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3728372 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728372 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728372' 00:06:12.374 killing process with pid 3728372 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3728372 00:06:12.374 12:58:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3728372 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:12.939 00:06:12.939 real 0m12.956s 00:06:12.939 user 0m49.824s 00:06:12.939 sys 0m1.868s 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.939 ************************************ 00:06:12.939 END TEST nvmf_filesystem_in_capsule 00:06:12.939 ************************************ 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.939 rmmod nvme_tcp 00:06:12.939 rmmod nvme_fabrics 00:06:12.939 rmmod nvme_keyring 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.939 12:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.841 12:58:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:14.841 00:06:14.841 real 0m30.637s 00:06:14.841 user 1m41.104s 00:06:14.841 sys 0m5.305s 00:06:14.841 12:58:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.841 12:58:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.841 ************************************ 00:06:14.841 END TEST nvmf_filesystem 00:06:14.841 ************************************ 00:06:14.841 12:58:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:14.841 12:58:36 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:14.841 12:58:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:14.841 12:58:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.841 12:58:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.841 ************************************ 00:06:14.841 START TEST nvmf_target_discovery 00:06:14.841 ************************************ 00:06:14.841 12:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:15.099 * Looking for test storage... 
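The nvmftestfini sequence above is the inverse of test setup: retry unloading nvme-tcp until it goes (sh@120-@122 tolerate a busy module), then remove nvme-fabrics, tear down the target's network namespace, and flush the initiator interface. A sketch; only the invocation of _remove_spdk_ns and the namespace name appear in the log, so its ip netns delete body is an assumption:

nvmfcleanup() {
    set +e                                 # sh@120: module removal may transiently fail
    for i in {1..20}; do                   # sh@121
        modprobe -v -r nvme-tcp && break   # sh@122: rmmods nvme_tcp, nvme_fabrics, nvme_keyring above
    done
    modprobe -v -r nvme-fabrics            # sh@123
    set -e                                 # sh@124
}
_remove_spdk_ns() {
    ip netns delete cvl_0_0_ns_spdk || true   # assumed body; namespace name taken from this log
}
nvmfcleanup
_remove_spdk_ns
ip -4 addr flush cvl_0_1                   # sh@279: clear the initiator-side address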
00:06:15.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.099 12:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.000 12:58:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.000 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:17.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.001 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:17.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:06:17.261 00:06:17.261 --- 10.0.0.2 ping statistics --- 00:06:17.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.261 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:06:17.261 00:06:17.261 --- 10.0.0.1 ping statistics --- 00:06:17.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.261 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3731988 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3731988 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3731988 ']' 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:17.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.261 12:58:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.261 [2024-07-15 12:58:38.876009] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:06:17.261 [2024-07-15 12:58:38.876086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.261 [2024-07-15 12:58:38.939142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.521 [2024-07-15 12:58:39.059380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.521 [2024-07-15 12:58:39.059444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.521 [2024-07-15 12:58:39.059460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.521 [2024-07-15 12:58:39.059473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.521 [2024-07-15 12:58:39.059484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.521 [2024-07-15 12:58:39.059576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.521 [2024-07-15 12:58:39.059632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.521 [2024-07-15 12:58:39.059688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.521 [2024-07-15 12:58:39.059691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 [2024-07-15 12:58:39.895308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
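With the target process up and its RPC socket answering, the test provisions four identical null-backed subsystems; the per-index steps unrolled below all repeat the discovery.sh@26-@30 pattern, with sizes from the NULL_BDEV_SIZE/NULL_BLOCK_SIZE variables set earlier. A condensed sketch of that loop plus the discovery listener and referral added at sh@32/sh@35:

for i in $(seq 1 4); do                                                  # discovery.sh@26
    rpc_cmd bdev_null_create "Null$i" 102400 512                         # sh@27
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"   # sh@28: -a allows any host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i" # sh@29
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420   # sh@30
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 # sh@32
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430           # sh@35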
00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 Null1 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 [2024-07-15 12:58:39.935581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 Null2 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:18.459 12:58:39 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 Null3 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 Null4 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.459 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.460 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:18.460 00:06:18.460 Discovery Log Number of Records 6, Generation counter 6 00:06:18.460 =====Discovery Log Entry 0====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: current discovery subsystem 00:06:18.460 treq: not required 00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4420 00:06:18.460 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: explicit discovery connections, duplicate discovery information 00:06:18.460 sectype: none 00:06:18.460 =====Discovery Log Entry 1====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: nvme subsystem 00:06:18.460 treq: not required 00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4420 00:06:18.460 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: none 00:06:18.460 sectype: none 00:06:18.460 =====Discovery Log Entry 2====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: nvme subsystem 00:06:18.460 treq: not required 00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4420 00:06:18.460 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: none 00:06:18.460 sectype: none 00:06:18.460 =====Discovery Log Entry 3====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: nvme subsystem 00:06:18.460 treq: not required 00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4420 00:06:18.460 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: none 00:06:18.460 sectype: none 00:06:18.460 =====Discovery Log Entry 4====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: nvme subsystem 00:06:18.460 treq: not required 
00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4420 00:06:18.460 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: none 00:06:18.460 sectype: none 00:06:18.460 =====Discovery Log Entry 5====== 00:06:18.460 trtype: tcp 00:06:18.460 adrfam: ipv4 00:06:18.460 subtype: discovery subsystem referral 00:06:18.460 treq: not required 00:06:18.460 portid: 0 00:06:18.460 trsvcid: 4430 00:06:18.460 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:18.460 traddr: 10.0.0.2 00:06:18.460 eflags: none 00:06:18.460 sectype: none 00:06:18.460 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:18.460 Perform nvmf subsystem discovery via RPC 00:06:18.460 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:18.460 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.460 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.460 [ 00:06:18.460 { 00:06:18.460 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:18.460 "subtype": "Discovery", 00:06:18.460 "listen_addresses": [ 00:06:18.460 { 00:06:18.460 "trtype": "TCP", 00:06:18.460 "adrfam": "IPv4", 00:06:18.460 "traddr": "10.0.0.2", 00:06:18.460 "trsvcid": "4420" 00:06:18.460 } 00:06:18.460 ], 00:06:18.460 "allow_any_host": true, 00:06:18.460 "hosts": [] 00:06:18.460 }, 00:06:18.460 { 00:06:18.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:18.460 "subtype": "NVMe", 00:06:18.460 "listen_addresses": [ 00:06:18.460 { 00:06:18.460 "trtype": "TCP", 00:06:18.460 "adrfam": "IPv4", 00:06:18.460 "traddr": "10.0.0.2", 00:06:18.460 "trsvcid": "4420" 00:06:18.460 } 00:06:18.460 ], 00:06:18.460 "allow_any_host": true, 00:06:18.460 "hosts": [], 00:06:18.460 "serial_number": "SPDK00000000000001", 00:06:18.460 "model_number": "SPDK bdev Controller", 00:06:18.460 "max_namespaces": 32, 00:06:18.460 "min_cntlid": 1, 00:06:18.460 "max_cntlid": 65519, 00:06:18.460 "namespaces": [ 00:06:18.460 { 00:06:18.460 "nsid": 1, 00:06:18.460 "bdev_name": "Null1", 00:06:18.460 "name": "Null1", 00:06:18.460 "nguid": "92D34C2E4C764D26939F756A1C31825A", 00:06:18.460 "uuid": "92d34c2e-4c76-4d26-939f-756a1c31825a" 00:06:18.460 } 00:06:18.460 ] 00:06:18.460 }, 00:06:18.460 { 00:06:18.460 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:18.460 "subtype": "NVMe", 00:06:18.460 "listen_addresses": [ 00:06:18.460 { 00:06:18.460 "trtype": "TCP", 00:06:18.460 "adrfam": "IPv4", 00:06:18.460 "traddr": "10.0.0.2", 00:06:18.460 "trsvcid": "4420" 00:06:18.460 } 00:06:18.460 ], 00:06:18.460 "allow_any_host": true, 00:06:18.460 "hosts": [], 00:06:18.460 "serial_number": "SPDK00000000000002", 00:06:18.460 "model_number": "SPDK bdev Controller", 00:06:18.460 "max_namespaces": 32, 00:06:18.460 "min_cntlid": 1, 00:06:18.460 "max_cntlid": 65519, 00:06:18.460 "namespaces": [ 00:06:18.460 { 00:06:18.460 "nsid": 1, 00:06:18.460 "bdev_name": "Null2", 00:06:18.460 "name": "Null2", 00:06:18.460 "nguid": "75C020ED287C46CFA2FD2079FED71973", 00:06:18.460 "uuid": "75c020ed-287c-46cf-a2fd-2079fed71973" 00:06:18.460 } 00:06:18.460 ] 00:06:18.460 }, 00:06:18.460 { 00:06:18.460 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:18.460 "subtype": "NVMe", 00:06:18.460 "listen_addresses": [ 00:06:18.460 { 00:06:18.460 "trtype": "TCP", 00:06:18.460 "adrfam": "IPv4", 00:06:18.460 "traddr": "10.0.0.2", 00:06:18.460 "trsvcid": "4420" 00:06:18.460 } 00:06:18.460 ], 00:06:18.460 "allow_any_host": true, 
00:06:18.460 "hosts": [], 00:06:18.460 "serial_number": "SPDK00000000000003", 00:06:18.460 "model_number": "SPDK bdev Controller", 00:06:18.460 "max_namespaces": 32, 00:06:18.460 "min_cntlid": 1, 00:06:18.460 "max_cntlid": 65519, 00:06:18.460 "namespaces": [ 00:06:18.460 { 00:06:18.460 "nsid": 1, 00:06:18.460 "bdev_name": "Null3", 00:06:18.460 "name": "Null3", 00:06:18.460 "nguid": "1561B4D58FFC4BCCBA18FB89BDB96AF2", 00:06:18.460 "uuid": "1561b4d5-8ffc-4bcc-ba18-fb89bdb96af2" 00:06:18.460 } 00:06:18.460 ] 00:06:18.460 }, 00:06:18.460 { 00:06:18.460 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:18.460 "subtype": "NVMe", 00:06:18.460 "listen_addresses": [ 00:06:18.460 { 00:06:18.460 "trtype": "TCP", 00:06:18.460 "adrfam": "IPv4", 00:06:18.460 "traddr": "10.0.0.2", 00:06:18.460 "trsvcid": "4420" 00:06:18.460 } 00:06:18.460 ], 00:06:18.460 "allow_any_host": true, 00:06:18.460 "hosts": [], 00:06:18.460 "serial_number": "SPDK00000000000004", 00:06:18.460 "model_number": "SPDK bdev Controller", 00:06:18.460 "max_namespaces": 32, 00:06:18.460 "min_cntlid": 1, 00:06:18.460 "max_cntlid": 65519, 00:06:18.460 "namespaces": [ 00:06:18.460 { 00:06:18.460 "nsid": 1, 00:06:18.460 "bdev_name": "Null4", 00:06:18.460 "name": "Null4", 00:06:18.460 "nguid": "E2C8536159604E34A07CCFD8C3BE628B", 00:06:18.461 "uuid": "e2c85361-5960-4e34-a07c-cfd8c3be628b" 00:06:18.461 } 00:06:18.461 ] 00:06:18.461 } 00:06:18.461 ] 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.461 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.749 rmmod nvme_tcp 00:06:18.749 rmmod nvme_fabrics 00:06:18.749 rmmod nvme_keyring 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3731988 ']' 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3731988 00:06:18.749 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3731988 ']' 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3731988 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731988 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731988' 00:06:18.750 killing process with pid 3731988 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3731988 00:06:18.750 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3731988 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.035 12:58:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.573 12:58:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:21.573 00:06:21.573 real 0m6.153s 00:06:21.573 user 0m7.178s 00:06:21.573 sys 0m1.880s 00:06:21.573 12:58:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.573 12:58:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:21.573 ************************************ 00:06:21.573 END TEST nvmf_target_discovery 00:06:21.573 ************************************ 00:06:21.573 12:58:42 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:21.573 12:58:42 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:21.573 12:58:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.573 12:58:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.573 12:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.573 ************************************ 00:06:21.573 START TEST nvmf_referrals 00:06:21.573 ************************************ 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:21.573 * Looking for test storage... 00:06:21.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
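[Editor's note: referrals.sh continues below with the referral port and NQNs. The three loopback addresses just defined (127.0.0.2 through 127.0.0.4) are the referrals the test registers against the discovery service and then reads back two ways, once over RPC and once from the host-side discovery log. A hedged reconstruction of that dual-view verification idiom follows; the jq filters and nvme flags are verbatim from the trace further down, the helper names mirror get_referral_ips, and the real bodies in referrals.sh may differ (the --hostnqn/--hostid arguments from the trace are omitted for brevity).]

# Target-side view: referrals the discovery service advertises.
get_referral_ips_rpc() {
    ./scripts/rpc.py nvmf_discovery_get_referrals |
        jq -r '.[].address.traddr' | sort
}

# Host-side view: every discovery-log record except the current
# discovery subsystem itself.
get_referral_ips_nvme() {
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort
}

# A round passes when both views agree, e.g. 127.0.0.2 127.0.0.3 127.0.0.4
# right after the three referrals are added, and empty after removal.
[[ "$(get_referral_ips_rpc)" == "$(get_referral_ips_nvme)" ]]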
00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.573 12:58:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.480 12:58:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:23.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:23.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.480 12:58:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:23.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:23.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:23.480 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.481 12:58:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:23.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:06:23.481 00:06:23.481 --- 10.0.0.2 ping statistics --- 00:06:23.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.481 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:06:23.481 00:06:23.481 --- 10.0.0.1 ping statistics --- 00:06:23.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.481 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3734088 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3734088 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3734088 ']' 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
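[Editor's note: the nvmf_tgt launch below then waits for its RPC socket. Before that output continues, here is a hedged condensation of the namespace plumbing nvmftestinit just performed: one port of the two-port E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so target and host exchange real NVMe/TCP traffic on a single machine. Commands and addresses are copied from the trace; error handling and the flush steps are omitted, and the nvmf_tgt path is shortened for readability.]

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port -> private namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target must work
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator must work

# The target application itself runs inside the namespace; its RPC socket
# (/var/tmp/spdk.sock) remains reachable from the root namespace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF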
00:06:23.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.481 12:58:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.481 [2024-07-15 12:58:44.909058] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:06:23.481 [2024-07-15 12:58:44.909143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.481 [2024-07-15 12:58:44.976641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.481 [2024-07-15 12:58:45.103380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.481 [2024-07-15 12:58:45.103433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.481 [2024-07-15 12:58:45.103450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.481 [2024-07-15 12:58:45.103463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.481 [2024-07-15 12:58:45.103474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.481 [2024-07-15 12:58:45.103562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.481 [2024-07-15 12:58:45.103625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.481 [2024-07-15 12:58:45.103688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.481 [2024-07-15 12:58:45.103691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 [2024-07-15 12:58:45.914109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 [2024-07-15 12:58:45.926317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:24.415 12:58:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.415 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:24.674 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.933 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:24.933 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:24.933 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:24.933 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:24.934 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:25.192 12:58:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.192 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:25.193 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:25.450 12:58:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:25.450 12:58:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:25.450 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:25.450 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:25.450 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:25.450 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.451 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:25.709 
12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:25.709 rmmod nvme_tcp 00:06:25.709 rmmod nvme_fabrics 00:06:25.709 rmmod nvme_keyring 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3734088 ']' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3734088 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3734088 ']' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3734088 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3734088 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3734088' 00:06:25.709 killing process with pid 3734088 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3734088 00:06:25.709 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3734088 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.970 12:58:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.506 12:58:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:28.506 00:06:28.506 real 0m6.915s 00:06:28.506 user 0m11.457s 00:06:28.506 sys 0m2.068s 00:06:28.506 12:58:49 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.506 12:58:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 ************************************ 00:06:28.506 END TEST nvmf_referrals 00:06:28.506 ************************************ 00:06:28.506 12:58:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:28.506 12:58:49 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:28.506 12:58:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.506 12:58:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.506 12:58:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 ************************************ 00:06:28.506 START TEST nvmf_connect_disconnect 00:06:28.506 ************************************ 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:28.506 * Looking for test storage... 00:06:28.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.506 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.507 12:58:49 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:28.507 12:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:30.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:30.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:30.415 12:58:51 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:30.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:30.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:30.415 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:30.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:06:30.416 00:06:30.416 --- 10.0.0.2 ping statistics --- 00:06:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.416 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:06:30.416 00:06:30.416 --- 10.0.0.1 ping statistics --- 00:06:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.416 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3736501 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3736501 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3736501 ']' 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.416 12:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.416 [2024-07-15 12:58:51.983289] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
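The stretch above is nvmftestinit turning the two ice ports into a point-to-point test rig: cvl_0_0 moves into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and both directions are ping-verified before the target starts. A minimal standalone sketch of the same topology, using only commands that appear in the log; the interface names are specific to this host and would differ elsewhere:

    # Rebuild the harness topology by hand. cvl_0_0/cvl_0_1 are the ice netdevs
    # enumerated above; substitute your own port names.
    ip netns add cvl_0_0_ns_spdk                 # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move port 0 to the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator
    modprobe nvme-tcp                                              # host-side transport driver
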
00:06:30.416 [2024-07-15 12:58:51.983389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.416 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.416 [2024-07-15 12:58:52.052429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.676 [2024-07-15 12:58:52.170893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.676 [2024-07-15 12:58:52.170946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.676 [2024-07-15 12:58:52.170975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.676 [2024-07-15 12:58:52.170988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.676 [2024-07-15 12:58:52.170998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.676 [2024-07-15 12:58:52.171068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.676 [2024-07-15 12:58:52.171094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.676 [2024-07-15 12:58:52.171155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.676 [2024-07-15 12:58:52.171157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.676 [2024-07-15 12:58:52.335912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:30.676 12:58:52 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.676 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:30.935 [2024-07-15 12:58:52.392927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:30.935 12:58:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:33.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:39.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.184 rmmod nvme_tcp 00:06:45.184 rmmod nvme_fabrics 00:06:45.184 rmmod nvme_keyring 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3736501 ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3736501 ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3736501' 00:06:45.184 killing process with pid 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3736501 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.184 12:59:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.105 12:59:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:47.105 00:06:47.105 real 0m18.926s 00:06:47.105 user 0m56.909s 00:06:47.105 sys 0m3.263s 00:06:47.105 12:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.105 12:59:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:47.105 ************************************ 00:06:47.105 END TEST nvmf_connect_disconnect 00:06:47.105 ************************************ 00:06:47.105 12:59:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:47.105 12:59:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:47.105 12:59:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.105 12:59:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.105 12:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.105 ************************************ 00:06:47.105 START TEST nvmf_multitarget 00:06:47.105 ************************************ 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:47.105 * Looking for test storage... 
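The nvmf_connect_disconnect pass that just ended boils down to a short RPC provisioning sequence plus five connect/disconnect cycles. A condensed sketch of what the log records; $rpc is a stand-in for the SPDK rpc.py client talking to the nvmf_tgt started above, and the nvme-cli loop body is an approximation, since the harness hides its exact connect logic behind set +x (only the five "disconnected 1 controller(s)" lines are visible):

    rpc="scripts/rpc.py"   # shorthand; talks to the target's /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, args as logged
    $rpc bdev_malloc_create 64 512                       # 64 MB, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in {1..5}; do    # num_iterations=5 in the log
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # NVME_HOST flags from common.sh
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits "... disconnected 1 controller(s)"
    done
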
00:06:47.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
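Both test exits seen so far run the same nvmftestfini teardown; stripped of its retry loop and guards it amounts to roughly the following. The netns removal is an assumption: _remove_spdk_ns runs with xtrace disabled in the log, so only its surrounding calls are visible.

    modprobe -v -r nvme-tcp          # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess/wait on the nvmf_tgt pid
    ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns (hidden above)
    ip -4 addr flush cvl_0_1              # drop the initiator address, as logged
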
00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.105 12:59:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:49.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:49.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:49.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
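This second enumeration pass repeats the first one verbatim: gather_supported_nvmf_pci_devs matches PCI IDs against a table of supported NICs (here the two E810 ports, vendor 0x8086 device 0x159b, bound to ice) and pulls their netdev names out of sysfs. One way to reproduce the discovery by hand, using the IDs from this run:

    # List E810 ports and the net interfaces behind them (yields cvl_0_0/cvl_0_1 here).
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci"
      ls "/sys/bus/pci/devices/$pci/net/"   # same sysfs glob the harness uses
    done
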
00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:49.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.014 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:06:49.275 00:06:49.275 --- 10.0.0.2 ping statistics --- 00:06:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.275 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:06:49.275 00:06:49.275 --- 10.0.0.1 ping statistics --- 00:06:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.275 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3740154 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3740154 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3740154 ']' 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.275 12:59:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:49.275 [2024-07-15 12:59:10.915679] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
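With the rig rebuilt and a fresh nvmf_tgt coming up, the actual multitarget logic follows: assert that exactly one (default) target exists, create two named targets, assert the count reaches three, delete them again, and assert the count is back to one. Condensed, with the multitarget_rpc.py subcommands and arguments exactly as logged:

    rpc_py=test/nvmf/target/multitarget_rpc.py            # path as in the log
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]
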
00:06:49.275 [2024-07-15 12:59:10.915774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:49.275 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.536 [2024-07-15 12:59:10.996441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:49.536 [2024-07-15 12:59:11.122285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-15 12:59:11.122339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-15 12:59:11.122356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-15 12:59:11.122370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-15 12:59:11.122381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-07-15 12:59:11.123902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 12:59:11.123933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-15 12:59:11.123986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-15 12:59:11.123989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:06:49.795 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:06:50.054 "nvmf_tgt_1"
00:06:50.054 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:06:50.054 "nvmf_tgt_2"
00:06:50.054 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:06:50.054 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:06:50.054 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:06:50.054 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:06:50.312 true
00:06:50.312 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:06:50.312 true
00:06:50.312 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:06:50.312 12:59:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:50.571 rmmod nvme_tcp
00:06:50.571 rmmod nvme_fabrics
00:06:50.571 rmmod nvme_keyring
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3740154 ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3740154
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3740154 ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3740154
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3740154
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3740154'
00:06:50.571 killing process with pid 3740154
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3740154
00:06:50.571 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3740154
00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget --
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.830 12:59:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.376 12:59:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:53.376 00:06:53.376 real 0m5.822s 00:06:53.376 user 0m6.652s 00:06:53.376 sys 0m1.926s 00:06:53.377 12:59:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.377 12:59:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:53.377 ************************************ 00:06:53.377 END TEST nvmf_multitarget 00:06:53.377 ************************************ 00:06:53.377 12:59:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:53.377 12:59:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:53.377 12:59:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.377 12:59:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.377 12:59:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.377 ************************************ 00:06:53.377 START TEST nvmf_rpc 00:06:53.377 ************************************ 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:53.377 * Looking for test storage... 
00:06:53.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.377 12:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:55.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:55.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:55.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:55.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.280 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:55.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:06:55.281 00:06:55.281 --- 10.0.0.2 ping statistics --- 00:06:55.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.281 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:55.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:55.281 00:06:55.281 --- 10.0.0.1 ping statistics --- 00:06:55.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.281 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3742255 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3742255 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3742255 ']' 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.281 12:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.281 [2024-07-15 12:59:16.796731] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:06:55.281 [2024-07-15 12:59:16.796812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.281 [2024-07-15 12:59:16.865064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.538 [2024-07-15 12:59:16.987185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.538 [2024-07-15 12:59:16.987245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:55.538 [2024-07-15 12:59:16.987261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.538 [2024-07-15 12:59:16.987274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.538 [2024-07-15 12:59:16.987285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.538 [2024-07-15 12:59:16.987373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.538 [2024-07-15 12:59:16.987448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.538 [2024-07-15 12:59:16.987412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.538 [2024-07-15 12:59:16.987451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:56.103 "tick_rate": 2700000000, 00:06:56.103 "poll_groups": [ 00:06:56.103 { 00:06:56.103 "name": "nvmf_tgt_poll_group_000", 00:06:56.103 "admin_qpairs": 0, 00:06:56.103 "io_qpairs": 0, 00:06:56.103 "current_admin_qpairs": 0, 00:06:56.103 "current_io_qpairs": 0, 00:06:56.103 "pending_bdev_io": 0, 00:06:56.103 "completed_nvme_io": 0, 00:06:56.103 "transports": [] 00:06:56.103 }, 00:06:56.103 { 00:06:56.103 "name": "nvmf_tgt_poll_group_001", 00:06:56.103 "admin_qpairs": 0, 00:06:56.103 "io_qpairs": 0, 00:06:56.103 "current_admin_qpairs": 0, 00:06:56.103 "current_io_qpairs": 0, 00:06:56.103 "pending_bdev_io": 0, 00:06:56.103 "completed_nvme_io": 0, 00:06:56.103 "transports": [] 00:06:56.103 }, 00:06:56.103 { 00:06:56.103 "name": "nvmf_tgt_poll_group_002", 00:06:56.103 "admin_qpairs": 0, 00:06:56.103 "io_qpairs": 0, 00:06:56.103 "current_admin_qpairs": 0, 00:06:56.103 "current_io_qpairs": 0, 00:06:56.103 "pending_bdev_io": 0, 00:06:56.103 "completed_nvme_io": 0, 00:06:56.103 "transports": [] 00:06:56.103 }, 00:06:56.103 { 00:06:56.103 "name": "nvmf_tgt_poll_group_003", 00:06:56.103 "admin_qpairs": 0, 00:06:56.103 "io_qpairs": 0, 00:06:56.103 "current_admin_qpairs": 0, 00:06:56.103 "current_io_qpairs": 0, 00:06:56.103 "pending_bdev_io": 0, 00:06:56.103 "completed_nvme_io": 0, 00:06:56.103 "transports": [] 00:06:56.103 } 00:06:56.103 ] 00:06:56.103 }' 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:56.103 12:59:17 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 [2024-07-15 12:59:17.867260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:56.360 "tick_rate": 2700000000, 00:06:56.360 "poll_groups": [ 00:06:56.360 { 00:06:56.360 "name": "nvmf_tgt_poll_group_000", 00:06:56.360 "admin_qpairs": 0, 00:06:56.360 "io_qpairs": 0, 00:06:56.360 "current_admin_qpairs": 0, 00:06:56.360 "current_io_qpairs": 0, 00:06:56.360 "pending_bdev_io": 0, 00:06:56.360 "completed_nvme_io": 0, 00:06:56.360 "transports": [ 00:06:56.360 { 00:06:56.360 "trtype": "TCP" 00:06:56.360 } 00:06:56.360 ] 00:06:56.360 }, 00:06:56.360 { 00:06:56.360 "name": "nvmf_tgt_poll_group_001", 00:06:56.360 "admin_qpairs": 0, 00:06:56.360 "io_qpairs": 0, 00:06:56.360 "current_admin_qpairs": 0, 00:06:56.360 "current_io_qpairs": 0, 00:06:56.360 "pending_bdev_io": 0, 00:06:56.360 "completed_nvme_io": 0, 00:06:56.360 "transports": [ 00:06:56.360 { 00:06:56.360 "trtype": "TCP" 00:06:56.360 } 00:06:56.360 ] 00:06:56.360 }, 00:06:56.360 { 00:06:56.360 "name": "nvmf_tgt_poll_group_002", 00:06:56.360 "admin_qpairs": 0, 00:06:56.360 "io_qpairs": 0, 00:06:56.360 "current_admin_qpairs": 0, 00:06:56.360 "current_io_qpairs": 0, 00:06:56.360 "pending_bdev_io": 0, 00:06:56.360 "completed_nvme_io": 0, 00:06:56.360 "transports": [ 00:06:56.360 { 00:06:56.360 "trtype": "TCP" 00:06:56.360 } 00:06:56.360 ] 00:06:56.360 }, 00:06:56.360 { 00:06:56.360 "name": "nvmf_tgt_poll_group_003", 00:06:56.360 "admin_qpairs": 0, 00:06:56.360 "io_qpairs": 0, 00:06:56.360 "current_admin_qpairs": 0, 00:06:56.360 "current_io_qpairs": 0, 00:06:56.360 "pending_bdev_io": 0, 00:06:56.360 "completed_nvme_io": 0, 00:06:56.360 "transports": [ 00:06:56.360 { 00:06:56.360 "trtype": "TCP" 00:06:56.360 } 00:06:56.360 ] 00:06:56.360 } 00:06:56.360 ] 00:06:56.360 }' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 Malloc1 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 [2024-07-15 12:59:18.016562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.360 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:56.361 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.361 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:56.361 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:56.361 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:56.361 [2024-07-15 12:59:18.039120] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:56.620 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:56.620 could not add new controller: failed to write to nvme-fabrics device 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.620 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.188 12:59:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.188 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:57.188 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.188 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:57.188 12:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:59.091 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:59.091 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:59.091 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.091 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.092 12:59:20 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:59.092 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.092 [2024-07-15 12:59:20.782203] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:59.351 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:59.351 could not add new controller: failed to write to nvme-fabrics device 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.351 12:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.916 12:59:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.916 12:59:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.916 12:59:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.916 12:59:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:59.916 12:59:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:01.820 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:02.079 12:59:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.079 [2024-07-15 12:59:23.573905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.079 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.080 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:02.080 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.080 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.080 12:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.080 12:59:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.646 12:59:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.646 12:59:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:02.646 12:59:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.646 12:59:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:02.646 12:59:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 [2024-07-15 12:59:26.429253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.180 12:59:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.437 12:59:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.437 12:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:05.437 12:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.437 12:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.437 12:59:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 [2024-07-15 12:59:29.254256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.970 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.541 12:59:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.541 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.542 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.542 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.542 12:59:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:10.450 12:59:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 [2024-07-15 12:59:32.126607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.450 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.386 12:59:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.386 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.386 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.386 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.386 12:59:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.293 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.293 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.294 
12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 [2024-07-15 12:59:34.947053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.294 12:59:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.294 12:59:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.233 12:59:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.233 12:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:14.233 12:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.233 12:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:14.233 12:59:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 [2024-07-15 12:59:37.696362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.135 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 [2024-07-15 12:59:37.744418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 [2024-07-15 12:59:37.792562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
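A minimal sketch of the host-side check the earlier iterations above perform (target/rpc.sh@86-91 plus waitforserial): connect over TCP, poll lsblk until a block device with the expected serial shows up, then disconnect. The addresses, NQNs and serial are the ones in the trace; the polling shape mirrors autotest_common.sh@1198-1208.

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
i=0
while (( i++ <= 15 )); do
    sleep 2
    # exactly one namespace with serial SPDKISFASTANDAWESOME is expected
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1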
00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.136 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 [2024-07-15 12:59:37.840718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
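The churn being traced here (target/rpc.sh@99-107) reduces to the loop below: create a subsystem, attach a TCP listener and a namespace, then tear it all down again, five times over. A sketch assuming rpc.py from the SPDK tree and a running nvmf_tgt; the rpc path is illustrative, while the RPC names and arguments are exactly those in the trace.

rpc=./scripts/rpc.py          # illustrative path into an SPDK checkout
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done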
00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 [2024-07-15 12:59:37.888919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:16.395 "tick_rate": 2700000000, 00:07:16.395 "poll_groups": [ 00:07:16.395 { 00:07:16.395 "name": "nvmf_tgt_poll_group_000", 00:07:16.395 "admin_qpairs": 2, 00:07:16.395 "io_qpairs": 84, 00:07:16.395 "current_admin_qpairs": 0, 00:07:16.395 "current_io_qpairs": 0, 00:07:16.395 "pending_bdev_io": 0, 00:07:16.395 "completed_nvme_io": 184, 00:07:16.395 "transports": [ 00:07:16.395 { 00:07:16.395 "trtype": "TCP" 00:07:16.395 } 00:07:16.395 ] 00:07:16.395 }, 00:07:16.395 { 00:07:16.395 "name": "nvmf_tgt_poll_group_001", 00:07:16.395 "admin_qpairs": 2, 00:07:16.395 "io_qpairs": 84, 00:07:16.395 "current_admin_qpairs": 0, 00:07:16.395 "current_io_qpairs": 0, 00:07:16.395 "pending_bdev_io": 0, 00:07:16.395 "completed_nvme_io": 134, 00:07:16.395 "transports": [ 00:07:16.395 { 00:07:16.395 "trtype": "TCP" 00:07:16.395 } 00:07:16.395 ] 00:07:16.395 }, 00:07:16.395 { 00:07:16.395 
"name": "nvmf_tgt_poll_group_002", 00:07:16.395 "admin_qpairs": 1, 00:07:16.395 "io_qpairs": 84, 00:07:16.395 "current_admin_qpairs": 0, 00:07:16.395 "current_io_qpairs": 0, 00:07:16.395 "pending_bdev_io": 0, 00:07:16.395 "completed_nvme_io": 183, 00:07:16.395 "transports": [ 00:07:16.395 { 00:07:16.395 "trtype": "TCP" 00:07:16.395 } 00:07:16.395 ] 00:07:16.395 }, 00:07:16.395 { 00:07:16.395 "name": "nvmf_tgt_poll_group_003", 00:07:16.395 "admin_qpairs": 2, 00:07:16.395 "io_qpairs": 84, 00:07:16.395 "current_admin_qpairs": 0, 00:07:16.395 "current_io_qpairs": 0, 00:07:16.395 "pending_bdev_io": 0, 00:07:16.395 "completed_nvme_io": 185, 00:07:16.395 "transports": [ 00:07:16.395 { 00:07:16.395 "trtype": "TCP" 00:07:16.395 } 00:07:16.395 ] 00:07:16.395 } 00:07:16.395 ] 00:07:16.395 }' 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.395 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:16.396 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:16.396 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:16.396 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:16.396 12:59:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.396 rmmod nvme_tcp 00:07:16.396 rmmod nvme_fabrics 00:07:16.396 rmmod nvme_keyring 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3742255 ']' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3742255 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3742255 ']' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3742255 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.396 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3742255 00:07:16.655 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:16.655 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.655 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3742255' 00:07:16.655 killing process with pid 3742255 00:07:16.655 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3742255 00:07:16.655 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3742255 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.914 12:59:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.818 12:59:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.818 00:07:18.818 real 0m25.940s 00:07:18.818 user 1m24.892s 00:07:18.818 sys 0m4.167s 00:07:18.818 12:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.818 12:59:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.818 ************************************ 00:07:18.818 END TEST nvmf_rpc 00:07:18.818 ************************************ 00:07:18.818 12:59:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:18.818 12:59:40 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:18.818 12:59:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.818 12:59:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.818 12:59:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.077 ************************************ 00:07:19.077 START TEST nvmf_invalid 00:07:19.077 ************************************ 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:19.077 * Looking for test storage... 
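The nvmf_get_stats check that closed the nvmf_rpc test above hinges on the jsum helper (target/rpc.sh@19-20), which totals one numeric field across all poll groups. A sketch of it, assuming the JSON captured by nvmf_get_stats is held in $stats as in the trace:

jsum() {
    local filter=$1
    # sum the selected field over every poll group
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 for the stats above
jsum '.poll_groups[].io_qpairs'      # 84 per poll group x 4 = 336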
00:07:19.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:19.077 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.078 12:59:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:20.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:20.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.985 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:20.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:20.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:07:20.986 00:07:20.986 --- 10.0.0.2 ping statistics --- 00:07:20.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.986 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:07:20.986 00:07:20.986 --- 10.0.0.1 ping statistics --- 00:07:20.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.986 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.986 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3746878 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3746878 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3746878 ']' 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.246 12:59:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 [2024-07-15 12:59:42.741098] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
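The interface plumbing traced just above (nvmf/common.sh@242-268) boils down to: move one port of the NIC into a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange real TCP traffic on the wire, then ping both ways to prove it. The commands below are the ones from the trace, gathered in order; the interface names are the cvl_0_* devices discovered earlier.

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator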
00:07:21.246 [2024-07-15 12:59:42.741197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.246 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.246 [2024-07-15 12:59:42.818314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.246 [2024-07-15 12:59:42.943797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.246 [2024-07-15 12:59:42.943860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.246 [2024-07-15 12:59:42.943885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.246 [2024-07-15 12:59:42.943900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.246 [2024-07-15 12:59:42.943912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.505 [2024-07-15 12:59:42.946906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.505 [2024-07-15 12:59:42.946940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.505 [2024-07-15 12:59:42.946993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.505 [2024-07-15 12:59:42.946997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:21.505 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24465 00:07:21.763 [2024-07-15 12:59:43.377570] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:21.763 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:21.763 { 00:07:21.763 "nqn": "nqn.2016-06.io.spdk:cnode24465", 00:07:21.763 "tgt_name": "foobar", 00:07:21.763 "method": "nvmf_create_subsystem", 00:07:21.763 "req_id": 1 00:07:21.763 } 00:07:21.763 Got JSON-RPC error response 00:07:21.763 response: 00:07:21.763 { 00:07:21.763 "code": -32603, 00:07:21.763 "message": "Unable to find target foobar" 00:07:21.763 }' 00:07:21.763 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:21.763 { 00:07:21.763 "nqn": "nqn.2016-06.io.spdk:cnode24465", 00:07:21.763 "tgt_name": "foobar", 00:07:21.763 "method": "nvmf_create_subsystem", 00:07:21.763 "req_id": 1 00:07:21.763 } 00:07:21.763 Got JSON-RPC error response 00:07:21.763 response: 00:07:21.763 { 00:07:21.763 "code": -32603, 00:07:21.763 "message": "Unable to find target foobar" 
00:07:21.763 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:21.763 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:21.763 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22398 00:07:22.020 [2024-07-15 12:59:43.658492] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22398: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:22.020 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:22.020 { 00:07:22.020 "nqn": "nqn.2016-06.io.spdk:cnode22398", 00:07:22.020 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:22.020 "method": "nvmf_create_subsystem", 00:07:22.020 "req_id": 1 00:07:22.020 } 00:07:22.020 Got JSON-RPC error response 00:07:22.020 response: 00:07:22.020 { 00:07:22.020 "code": -32602, 00:07:22.020 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:22.020 }' 00:07:22.020 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:22.020 { 00:07:22.020 "nqn": "nqn.2016-06.io.spdk:cnode22398", 00:07:22.020 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:22.020 "method": "nvmf_create_subsystem", 00:07:22.020 "req_id": 1 00:07:22.020 } 00:07:22.020 Got JSON-RPC error response 00:07:22.020 response: 00:07:22.020 { 00:07:22.020 "code": -32602, 00:07:22.020 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:22.020 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:22.020 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:22.020 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27480 00:07:22.279 [2024-07-15 12:59:43.907288] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27480: invalid model number 'SPDK_Controller' 00:07:22.279 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:22.279 { 00:07:22.279 "nqn": "nqn.2016-06.io.spdk:cnode27480", 00:07:22.279 "model_number": "SPDK_Controller\u001f", 00:07:22.279 "method": "nvmf_create_subsystem", 00:07:22.279 "req_id": 1 00:07:22.279 } 00:07:22.279 Got JSON-RPC error response 00:07:22.279 response: 00:07:22.279 { 00:07:22.279 "code": -32602, 00:07:22.279 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.279 }' 00:07:22.279 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:22.279 { 00:07:22.279 "nqn": "nqn.2016-06.io.spdk:cnode27480", 00:07:22.279 "model_number": "SPDK_Controller\u001f", 00:07:22.279 "method": "nvmf_create_subsystem", 00:07:22.280 "req_id": 1 00:07:22.280 } 00:07:22.280 Got JSON-RPC error response 00:07:22.280 response: 00:07:22.280 { 00:07:22.280 "code": -32602, 00:07:22.280 "message": "Invalid MN SPDK_Controller\u001f" 00:07:22.280 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 
12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:22.280 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:22.538 12:59:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '3oUT3v$bQgh`^3\B/KjT' 00:07:22.538 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '3oUT3v$bQgh`^3\B/KjT' nqn.2016-06.io.spdk:cnode1504 00:07:22.538 [2024-07-15 12:59:44.232415] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1504: invalid serial number '3oUT3v$bQgh`^3\B/KjT' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:22.799 { 00:07:22.799 "nqn": "nqn.2016-06.io.spdk:cnode1504", 00:07:22.799 "serial_number": "3oUT3v$bQ\u007fgh`^3\\B/KjT", 00:07:22.799 "method": "nvmf_create_subsystem", 00:07:22.799 "req_id": 1 00:07:22.799 } 00:07:22.799 Got JSON-RPC error response 00:07:22.799 response: 
00:07:22.799 { 00:07:22.799 "code": -32602, 00:07:22.799 "message": "Invalid SN 3oUT3v$bQ\u007fgh`^3\\B/KjT" 00:07:22.799 }' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:22.799 { 00:07:22.799 "nqn": "nqn.2016-06.io.spdk:cnode1504", 00:07:22.799 "serial_number": "3oUT3v$bQ\u007fgh`^3\\B/KjT", 00:07:22.799 "method": "nvmf_create_subsystem", 00:07:22.799 "req_id": 1 00:07:22.799 } 00:07:22.799 Got JSON-RPC error response 00:07:22.799 response: 00:07:22.799 { 00:07:22.799 "code": -32602, 00:07:22.799 "message": "Invalid SN 3oUT3v$bQ\u007fgh`^3\\B/KjT" 00:07:22.799 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:22.799 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 
00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:22.800 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 
00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bR1!U3A=&h2xGex1=5E|5@3Npu5RA'\''!Wa`wBMs7m\' 00:07:22.801 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'bR1!U3A=&h2xGex1=5E|5@3Npu5RA'\''!Wa`wBMs7m\' nqn.2016-06.io.spdk:cnode31309 00:07:23.059 [2024-07-15 12:59:44.645718] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31309: invalid model number 'bR1!U3A=&h2xGex1=5E|5@3Npu5RA'!Wa`wBMs7m\' 00:07:23.059 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:23.059 { 00:07:23.059 "nqn": "nqn.2016-06.io.spdk:cnode31309", 00:07:23.059 "model_number": "bR1!U3A=&h2xGex1=5E|5@3Npu5RA'\''!Wa`wBMs7m\\", 00:07:23.059 "method": "nvmf_create_subsystem", 00:07:23.059 "req_id": 1 00:07:23.059 } 00:07:23.059 Got JSON-RPC error response 00:07:23.059 response: 00:07:23.059 { 00:07:23.059 "code": -32602, 00:07:23.059 "message": "Invalid MN bR1!U3A=&h2xGex1=5E|5@3Npu5RA'\''!Wa`wBMs7m\\" 00:07:23.059 }' 00:07:23.059 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:23.059 { 00:07:23.059 "nqn": "nqn.2016-06.io.spdk:cnode31309", 00:07:23.059 "model_number": "bR1!U3A=&h2xGex1=5E|5@3Npu5RA'!Wa`wBMs7m\\", 00:07:23.059 "method": "nvmf_create_subsystem", 00:07:23.059 "req_id": 1 00:07:23.059 } 00:07:23.059 Got JSON-RPC error response 00:07:23.059 response: 
00:07:23.059 { 00:07:23.059 "code": -32602, 00:07:23.059 "message": "Invalid MN bR1!U3A=&h2xGex1=5E|5@3Npu5RA'!Wa`wBMs7m\\" 00:07:23.059 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:23.059 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:23.328 [2024-07-15 12:59:44.902652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.328 12:59:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:23.611 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:23.611 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:23.611 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:23.611 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:23.611 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:23.869 [2024-07-15 12:59:45.404296] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:23.869 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:23.869 { 00:07:23.869 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:23.869 "listen_address": { 00:07:23.869 "trtype": "tcp", 00:07:23.869 "traddr": "", 00:07:23.869 "trsvcid": "4421" 00:07:23.869 }, 00:07:23.869 "method": "nvmf_subsystem_remove_listener", 00:07:23.869 "req_id": 1 00:07:23.869 } 00:07:23.869 Got JSON-RPC error response 00:07:23.869 response: 00:07:23.869 { 00:07:23.869 "code": -32602, 00:07:23.869 "message": "Invalid parameters" 00:07:23.869 }' 00:07:23.869 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:23.869 { 00:07:23.869 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:23.869 "listen_address": { 00:07:23.869 "trtype": "tcp", 00:07:23.869 "traddr": "", 00:07:23.869 "trsvcid": "4421" 00:07:23.869 }, 00:07:23.869 "method": "nvmf_subsystem_remove_listener", 00:07:23.869 "req_id": 1 00:07:23.869 } 00:07:23.869 Got JSON-RPC error response 00:07:23.869 response: 00:07:23.869 { 00:07:23.869 "code": -32602, 00:07:23.869 "message": "Invalid parameters" 00:07:23.869 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:23.869 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25973 -i 0 00:07:24.127 [2024-07-15 12:59:45.669094] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25973: invalid cntlid range [0-65519] 00:07:24.127 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:24.127 { 00:07:24.127 "nqn": "nqn.2016-06.io.spdk:cnode25973", 00:07:24.127 "min_cntlid": 0, 00:07:24.127 "method": "nvmf_create_subsystem", 00:07:24.127 "req_id": 1 00:07:24.127 } 00:07:24.128 Got JSON-RPC error response 00:07:24.128 response: 00:07:24.128 { 00:07:24.128 "code": -32602, 00:07:24.128 "message": "Invalid cntlid range [0-65519]" 00:07:24.128 }' 00:07:24.128 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:24.128 { 00:07:24.128 "nqn": "nqn.2016-06.io.spdk:cnode25973", 00:07:24.128 "min_cntlid": 0, 00:07:24.128 "method": "nvmf_create_subsystem", 00:07:24.128 
"req_id": 1 00:07:24.128 } 00:07:24.128 Got JSON-RPC error response 00:07:24.128 response: 00:07:24.128 { 00:07:24.128 "code": -32602, 00:07:24.128 "message": "Invalid cntlid range [0-65519]" 00:07:24.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.128 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18400 -i 65520 00:07:24.386 [2024-07-15 12:59:45.921951] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18400: invalid cntlid range [65520-65519] 00:07:24.386 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:24.386 { 00:07:24.386 "nqn": "nqn.2016-06.io.spdk:cnode18400", 00:07:24.386 "min_cntlid": 65520, 00:07:24.386 "method": "nvmf_create_subsystem", 00:07:24.386 "req_id": 1 00:07:24.387 } 00:07:24.387 Got JSON-RPC error response 00:07:24.387 response: 00:07:24.387 { 00:07:24.387 "code": -32602, 00:07:24.387 "message": "Invalid cntlid range [65520-65519]" 00:07:24.387 }' 00:07:24.387 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:24.387 { 00:07:24.387 "nqn": "nqn.2016-06.io.spdk:cnode18400", 00:07:24.387 "min_cntlid": 65520, 00:07:24.387 "method": "nvmf_create_subsystem", 00:07:24.387 "req_id": 1 00:07:24.387 } 00:07:24.387 Got JSON-RPC error response 00:07:24.387 response: 00:07:24.387 { 00:07:24.387 "code": -32602, 00:07:24.387 "message": "Invalid cntlid range [65520-65519]" 00:07:24.387 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.387 12:59:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13268 -I 0 00:07:24.645 [2024-07-15 12:59:46.182892] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13268: invalid cntlid range [1-0] 00:07:24.645 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:24.645 { 00:07:24.645 "nqn": "nqn.2016-06.io.spdk:cnode13268", 00:07:24.645 "max_cntlid": 0, 00:07:24.645 "method": "nvmf_create_subsystem", 00:07:24.645 "req_id": 1 00:07:24.645 } 00:07:24.645 Got JSON-RPC error response 00:07:24.645 response: 00:07:24.645 { 00:07:24.645 "code": -32602, 00:07:24.645 "message": "Invalid cntlid range [1-0]" 00:07:24.645 }' 00:07:24.645 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:24.645 { 00:07:24.645 "nqn": "nqn.2016-06.io.spdk:cnode13268", 00:07:24.645 "max_cntlid": 0, 00:07:24.645 "method": "nvmf_create_subsystem", 00:07:24.645 "req_id": 1 00:07:24.645 } 00:07:24.645 Got JSON-RPC error response 00:07:24.645 response: 00:07:24.645 { 00:07:24.645 "code": -32602, 00:07:24.645 "message": "Invalid cntlid range [1-0]" 00:07:24.645 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.645 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31064 -I 65520 00:07:24.903 [2024-07-15 12:59:46.431679] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31064: invalid cntlid range [1-65520] 00:07:24.903 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:24.903 { 00:07:24.903 "nqn": "nqn.2016-06.io.spdk:cnode31064", 00:07:24.903 "max_cntlid": 65520, 00:07:24.903 "method": "nvmf_create_subsystem", 00:07:24.903 "req_id": 1 00:07:24.903 } 
00:07:24.903 Got JSON-RPC error response 00:07:24.903 response: 00:07:24.903 { 00:07:24.903 "code": -32602, 00:07:24.903 "message": "Invalid cntlid range [1-65520]" 00:07:24.903 }' 00:07:24.903 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:24.903 { 00:07:24.903 "nqn": "nqn.2016-06.io.spdk:cnode31064", 00:07:24.903 "max_cntlid": 65520, 00:07:24.903 "method": "nvmf_create_subsystem", 00:07:24.903 "req_id": 1 00:07:24.903 } 00:07:24.903 Got JSON-RPC error response 00:07:24.903 response: 00:07:24.903 { 00:07:24.903 "code": -32602, 00:07:24.903 "message": "Invalid cntlid range [1-65520]" 00:07:24.903 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:24.903 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1869 -i 6 -I 5 00:07:25.162 [2024-07-15 12:59:46.688582] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1869: invalid cntlid range [6-5] 00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:25.162 { 00:07:25.162 "nqn": "nqn.2016-06.io.spdk:cnode1869", 00:07:25.162 "min_cntlid": 6, 00:07:25.162 "max_cntlid": 5, 00:07:25.162 "method": "nvmf_create_subsystem", 00:07:25.162 "req_id": 1 00:07:25.162 } 00:07:25.162 Got JSON-RPC error response 00:07:25.162 response: 00:07:25.162 { 00:07:25.162 "code": -32602, 00:07:25.162 "message": "Invalid cntlid range [6-5]" 00:07:25.162 }' 00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:25.162 { 00:07:25.162 "nqn": "nqn.2016-06.io.spdk:cnode1869", 00:07:25.162 "min_cntlid": 6, 00:07:25.162 "max_cntlid": 5, 00:07:25.162 "method": "nvmf_create_subsystem", 00:07:25.162 "req_id": 1 00:07:25.162 } 00:07:25.162 Got JSON-RPC error response 00:07:25.162 response: 00:07:25.162 { 00:07:25.162 "code": -32602, 00:07:25.162 "message": "Invalid cntlid range [6-5]" 00:07:25.162 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:25.162 { 00:07:25.162 "name": "foobar", 00:07:25.162 "method": "nvmf_delete_target", 00:07:25.162 "req_id": 1 00:07:25.162 } 00:07:25.162 Got JSON-RPC error response 00:07:25.162 response: 00:07:25.162 { 00:07:25.162 "code": -32602, 00:07:25.162 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:25.162 }' 00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:25.162 { 00:07:25.162 "name": "foobar", 00:07:25.162 "method": "nvmf_delete_target", 00:07:25.162 "req_id": 1 00:07:25.162 } 00:07:25.162 Got JSON-RPC error response 00:07:25.162 response: 00:07:25.162 { 00:07:25.162 "code": -32602, 00:07:25.162 "message": "The specified target doesn't exist, cannot delete it." 
00:07:25.162 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:25.162 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync
00:07:25.163 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:25.163 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e
00:07:25.163 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:25.163 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:25.163 rmmod nvme_tcp
00:07:25.163 rmmod nvme_fabrics
00:07:25.163 rmmod nvme_keyring
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3746878 ']'
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3746878
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3746878 ']'
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3746878
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746878
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746878'
00:07:25.422 killing process with pid 3746878
12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3746878
00:07:25.422 12:59:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3746878
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:25.681 12:59:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:27.614 12:59:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:27.615
00:07:27.615 real 0m8.693s
00:07:27.615 user 0m20.506s
00:07:27.615 sys 0m2.401s
00:07:27.615 12:59:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:27.615 12:59:49
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.615 ************************************ 00:07:27.615 END TEST nvmf_invalid 00:07:27.615 ************************************ 00:07:27.615 12:59:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.615 12:59:49 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.615 12:59:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.615 12:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.615 12:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.615 ************************************ 00:07:27.615 START TEST nvmf_abort 00:07:27.615 ************************************ 00:07:27.615 12:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.874 * Looking for test storage... 00:07:27.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.874 12:59:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:27.874 12:59:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.777 
12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:29.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:29.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:29.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:29.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.777 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:30.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:30.036 00:07:30.036 --- 10.0.0.2 ping statistics --- 00:07:30.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.036 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:30.036 00:07:30.036 --- 10.0.0.1 ping statistics --- 00:07:30.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.036 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3749514 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3749514 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3749514 ']' 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.036 12:59:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.036 [2024-07-15 12:59:51.588711] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
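The nvmf_tcp_init block above is the entire network fixture for this test: one e810 port (cvl_0_0) is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24 to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and a one-packet ping in each direction proves the path; only then does nvmfappstart launch nvmf_tgt inside the namespace (the "Starting SPDK" line above). A minimal standalone sketch of the same fixture, using hypothetical port names port0/port1 and namespace name tgt_ns in place of the harness's cvl_* names:

    ip netns add tgt_ns                                  # target-side namespace
    ip link set port0 netns tgt_ns                       # hide one port inside it
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev port0
    ip netns exec tgt_ns ip link set port0 up
    ip netns exec tgt_ns ip link set lo up
    ip addr add 10.0.0.1/24 dev port1                    # initiator stays in the root ns
    ip link set port1 up
    iptables -I INPUT 1 -i port1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec tgt_ns ping -c 1 10.0.0.1              # target ns -> root ns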
00:07:30.036 [2024-07-15 12:59:51.588811] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.036 [2024-07-15 12:59:51.658272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.296 [2024-07-15 12:59:51.778134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.296 [2024-07-15 12:59:51.778216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.296 [2024-07-15 12:59:51.778241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.296 [2024-07-15 12:59:51.778254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.296 [2024-07-15 12:59:51.778266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.296 [2024-07-15 12:59:51.778361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.296 [2024-07-15 12:59:51.778459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.296 [2024-07-15 12:59:51.778461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.863 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.863 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:30.863 12:59:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.863 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.863 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.122 [2024-07-15 12:59:52.579705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.122 Malloc0 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.122 Delay0 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
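Once waitforlisten returns, abort.sh provisions the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc ramdisk with 4096-byte blocks, a delay bdev that injects one second of latency on the read and write paths, and subsystem nqn.2016-06.io.spdk:cnode0; the namespace and the 10.0.0.2:4420 listener are added in the lines that follow. The same sequence as direct rpc.py calls (a sketch; rpc.py talks to the default /var/tmp/spdk.sock, the transport flags are copied verbatim from the logged rpc_cmd invocations, and the delay values are in microseconds):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s injected latency
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420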
00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.122 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 [2024-07-15 12:59:52.644784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.123 12:59:52 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:31.123 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.123 [2024-07-15 12:59:52.792030] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:33.663 Initializing NVMe Controllers 00:07:33.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:33.663 controller IO queue size 128 less than required 00:07:33.663 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:33.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:33.663 Initialization complete. Launching workers. 
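The workload itself is the bundled abort example, pointed at the listener the RPCs just created. Asking for queue depth 128 deliberately exceeds what the Delay0-backed controller admits, so submissions queue up behind the one-second delay bdev and become candidates for the abort commands whose counters are summarized below; the "IO queue size 128 less than required" notice is that mismatch being reported, not a failure. A sketch of the same invocation from the build tree:

    # single core (-c 0x1), 1 s run (-t 1), warning-level logging, queue depth 128
    build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'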
00:07:33.663 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33210 00:07:33.663 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33271, failed to submit 62 00:07:33.663 success 33214, unsuccess 57, failed 0 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.663 rmmod nvme_tcp 00:07:33.663 rmmod nvme_fabrics 00:07:33.663 rmmod nvme_keyring 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3749514 ']' 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3749514 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3749514 ']' 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3749514 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3749514 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3749514' 00:07:33.663 killing process with pid 3749514 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3749514 00:07:33.663 12:59:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3749514 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.663 12:59:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.573 12:59:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.573 00:07:35.573 real 0m7.980s 00:07:35.573 user 0m12.712s 00:07:35.573 sys 0m2.520s 00:07:35.573 12:59:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.573 12:59:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.573 ************************************ 00:07:35.573 END TEST nvmf_abort 00:07:35.573 ************************************ 00:07:35.832 12:59:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.832 12:59:57 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.832 12:59:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.832 12:59:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.832 12:59:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.832 ************************************ 00:07:35.832 START TEST nvmf_ns_hotplug_stress 00:07:35.832 ************************************ 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.832 * Looking for test storage... 00:07:35.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.832 12:59:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.832 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.833 12:59:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.833 12:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.738 12:59:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.738 12:59:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.738 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:07:37.997 00:07:37.997 --- 10.0.0.2 ping statistics --- 00:07:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.997 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:07:37.997 00:07:37.997 --- 10.0.0.1 ping statistics --- 00:07:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.997 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3751863 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3751863 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3751863 ']' 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.997 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.997 [2024-07-15 12:59:59.556372] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
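As in the abort test above, nvmfappstart backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock — that wait is the gap between the "Waiting for process..." line and the EAL/reactor banner that follows. A minimal sketch of that supervision pattern (the polling loop and its retry budget are illustrative, not the harness's exact code):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target is ready (or give up after ~10 s)
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done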
00:07:37.997 [2024-07-15 12:59:59.556466] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.997 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.997 [2024-07-15 12:59:59.632874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.255 [2024-07-15 12:59:59.757469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.255 [2024-07-15 12:59:59.757528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.255 [2024-07-15 12:59:59.757546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.255 [2024-07-15 12:59:59.757560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.255 [2024-07-15 12:59:59.757572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.255 [2024-07-15 12:59:59.757645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.255 [2024-07-15 12:59:59.757700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.255 [2024-07-15 12:59:59.757704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:38.255 12:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.519 [2024-07-15 13:00:00.134728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.519 13:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.785 13:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.042 [2024-07-15 13:00:00.677536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.042 13:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.299 13:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:39.557 Malloc0 00:07:39.557 13:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:39.815 Delay0 00:07:39.815 13:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.071 13:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:40.327 NULL1 00:07:40.327 13:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:40.598 13:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3752282 00:07:40.598 13:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:40.598 13:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:40.598 13:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.598 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.972 Read completed with error (sct=0, sc=11) 00:07:41.972 13:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.230 13:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:42.230 13:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:42.487 true 00:07:42.487 13:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:42.487 13:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.055 13:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.312 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:43.312 13:00:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:43.569 true 00:07:43.569 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:43.569 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.826 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.083 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:44.083 13:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:44.340 true 00:07:44.340 13:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:44.340 13:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.275 13:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.533 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:45.533 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:45.790 true 00:07:45.790 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:45.790 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.049 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.306 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:46.306 13:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:46.564 true 00:07:46.564 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:46.564 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.822 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.080 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:47.080 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 
00:07:47.338 true 00:07:47.338 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:47.338 13:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.273 13:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.531 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:48.531 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:48.790 true 00:07:48.790 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:48.790 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.047 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.303 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:49.303 13:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:49.560 true 00:07:49.560 13:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:49.560 13:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.493 13:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.752 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:50.752 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:50.752 true 00:07:50.752 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:50.752 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.016 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.273 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:51.273 13:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:51.531 true 00:07:51.531 13:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:51.531 13:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.466 13:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.724 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:52.724 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:52.981 true 00:07:52.981 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:52.981 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.239 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.497 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:53.497 13:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:53.756 true 00:07:53.756 13:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:53.756 13:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.691 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.691 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:54.691 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:54.950 true 00:07:54.950 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:54.950 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.206 13:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.465 13:00:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:55.465 13:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:55.721 true 00:07:55.721 13:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:55.721 13:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.659 13:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.918 13:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:56.918 13:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:57.181 true 00:07:57.181 13:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:57.181 13:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.442 13:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.700 13:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:57.700 13:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:57.956 true 00:07:57.956 13:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:57.956 13:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.890 13:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.183 13:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:59.183 13:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:59.441 true 00:07:59.441 13:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:07:59.441 13:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:59.697 13:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.954 13:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:59.954 13:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:00.210 true 00:08:00.210 13:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:00.210 13:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.148 13:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.405 13:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:01.405 13:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:01.664 true 00:08:01.664 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:01.664 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.921 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.179 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:02.179 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:02.179 true 00:08:02.439 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:02.439 13:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.387 13:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.387 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:03.387 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:03.657 true 00:08:03.657 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:03.657 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.915 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.172 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:04.172 13:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:04.430 true 00:08:04.430 13:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:04.430 13:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.365 13:00:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.623 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:05.623 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:05.880 true 00:08:05.881 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:05.881 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.138 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.398 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:06.398 13:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:06.398 true 00:08:06.657 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:06.657 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.915 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.915 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:06.915 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:07.172 true 00:08:07.172 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282 00:08:07.172 13:00:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.547 13:00:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:08.547 13:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:08.547 13:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:08.804 true
00:08:08.804 13:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282
00:08:08.804 13:00:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.740 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.740 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:09.740 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:09.997 true
00:08:09.997 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282
00:08:09.997 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.255 13:00:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.512 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:10.512 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:10.771 true
00:08:10.771 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282
00:08:10.771 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.028 Initializing NVMe Controllers
00:08:11.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:11.028 Controller IO queue size 128, less than required.
00:08:11.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:11.028 Controller IO queue size 128, less than required.
00:08:11.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:11.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:11.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:11.028 Initialization complete. Launching workers.
00:08:11.028 ========================================================
00:08:11.028                                                                           Latency(us)
00:08:11.028 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:11.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1027.11       0.50   66066.37    2558.16 1012612.89
00:08:11.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11220.81       5.48   11407.05    1842.81  453078.42
00:08:11.028 ========================================================
00:08:11.028 Total                                                                  :   12247.92       5.98   15990.79    1842.81 1012612.89
00:08:11.028
00:08:11.028 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:11.286 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:11.286 13:00:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:11.543 true
00:08:11.543 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3752282
00:08:11.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3752282) - No such process
00:08:11.543 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3752282
00:08:11.543 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.799 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:12.057 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:12.057 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:12.057 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:12.057 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:12.057 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:12.314 null0
00:08:12.314 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:12.314 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:12.314 13:00:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:12.571 null1
00:08:12.571 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:12.571 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:12.571 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:12.828 null2
00:08:12.828 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:12.828 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:12.828 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:13.085 null3
00:08:13.085 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.085 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.085 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:13.342 null4
00:08:13.342 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.342 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.342 13:00:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:13.599 null5
00:08:13.599 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.599 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.599 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:13.858 null6
00:08:13.858 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:13.858 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:13.858 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:14.117 null7
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
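The hot-plug phase that ends above with "kill: (3752282) - No such process" repeats one fixed pattern per iteration: confirm the background I/O process (PID 3752282) is still alive, hot-remove namespace 1, re-attach the Delay0 bdev, bump null_size, and grow the NULL1 bdev backing NSID 2. A minimal bash sketch of that loop, reconstructed from the -x trace at script lines @44-@50 (the while-loop form, the rpc_py/perf_pid names, and the exact null_size expression are assumptions; only the individual commands appear in the trace):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=3752282                                                     # background I/O generator gating the loop
    while kill -0 $perf_pid; do                                          # @44: loop until the I/O load exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))                                     # @49: assumed increment; -x shows only the result (1026, 1027, ...)
        $rpc_py bdev_null_resize NULL1 $null_size                        # @50: grow the bdev backing NSID 2
    done
    wait $perf_pid                                                       # @53: reap the finished process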
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
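From @58 onward the test fans out to eight concurrent workers, one per freshly created null bdev; the setup being traced here reduces to the sketch below, reconstructed from the -x output (rpc_py again abbreviates the full scripts/rpc.py path shown in the trace):

    nthreads=8                                     # @58
    pids=()                                        # @58
    for ((i = 0; i < nthreads; ++i)); do           # @59
        $rpc_py bdev_null_create null$i 100 4096   # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; ++i)); do           # @62
        add_remove $((i + 1)) null$i &             # @63: worker for NSID i+1 backed by null$i
        pids+=($!)                                 # @64
    done
    wait "${pids[@]}"                              # @66: the eight worker PIDs listed below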
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
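Each worker runs the add_remove helper whose expansions at @14-@18 make up the interleaved remainder of this trace: ten rounds of attaching its null bdev under a fixed NSID and detaching it again. A sketch inferred from the trace (the function wrapper and loop variable are assumptions; the body matches the traced commands):

    add_remove() {
        local nsid=$1 bdev=$2                   # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; ++i)); do          # @16: ten hot-plug cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # @18
        done
    }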
00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.117 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3756855 3756856 3756858 3756860 3756862 3756864 3756866 3756868 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.118 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.375 13:00:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.633 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.890 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 
13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.148 13:00:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.405 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.406 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.674 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.932 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 
13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.190 13:00:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.448 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.706 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.965 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.965 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.965 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.224 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:17.224 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.224 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.224 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.224 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.482 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.482 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.482 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.482 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.482 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.483 
13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.483 13:00:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.740 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.996 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.254 13:00:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.512 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.512 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
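The trace above is the heart of the hotplug cycle: ns_hotplug_stress.sh@16 is the C-style loop header, @17 attaches one of the null bdevs (null0 through null7) as namespace n of nqn.2016-06.io.spdk:cnode1, and @18 detaches all eight again while the initiator keeps I/O in flight. The shuffled namespace order and the interleaved ++i / i < 10 pairs indicate the RPC calls are issued concurrently. A minimal sketch of one cycle, assuming backgrounded RPCs and an explicit shuf to reproduce the ordering seen here:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do                # @16
        for n in $(shuf -e {1..8}); do            # shuffled order is an assumption
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # @17
        done
        wait
        for n in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &                    # @18
        done
        wait
    done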
00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.771 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.030 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.289 13:00:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.546 rmmod nvme_tcp 00:08:19.546 rmmod nvme_fabrics 00:08:19.546 rmmod nvme_keyring 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3751863 ']' 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3751863 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3751863 ']' 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3751863 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:19.546 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.546 13:00:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3751863 00:08:19.805 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:19.805 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:19.805 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3751863' 00:08:19.805 killing process with pid 3751863 00:08:19.805 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3751863 00:08:19.805 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3751863 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.064 13:00:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.965 13:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.965 00:08:21.965 real 0m46.270s 00:08:21.965 user 3m30.215s 00:08:21.965 sys 0m16.416s 00:08:21.965 13:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.965 13:00:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.965 ************************************ 00:08:21.965 END TEST nvmf_ns_hotplug_stress 00:08:21.965 ************************************ 00:08:21.965 13:00:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:21.965 13:00:43 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:21.965 13:00:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.965 13:00:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.965 13:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.965 ************************************ 00:08:21.965 START TEST nvmf_connect_stress 00:08:21.965 ************************************ 00:08:21.965 13:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:21.965 * Looking for test storage... 
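Between the END TEST and START TEST banners above, the trap is cleared (@68) and nvmftestfini (@70) tears the previous rig down: sync, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -r output), kill target pid 3751863 (identified as reactor_1, hence no special sudo handling), drop the spdk network namespace, and flush the initiator-side address. A condensed sketch with the helper's guards and retries stripped:

    sync
    modprobe -v -r nvme-tcp                    # also pulls nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3751863 && wait 3751863 2>/dev/null   # killprocess
    ip netns delete cvl_0_0_ns_spdk            # _remove_spdk_ns, xtrace suppressed in the log
    ip -4 addr flush cvl_0_1

The nvmf_connect_stress test that starts here then rebuilds the same environment from scratch.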
00:08:22.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.222 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.223 13:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:24.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:24.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:24.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.119 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.120 13:00:45 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:24.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:24.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:24.120 00:08:24.120 --- 10.0.0.2 ping statistics --- 00:08:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.120 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:24.120 00:08:24.120 --- 10.0.0.1 ping statistics --- 00:08:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.120 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.120 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3759615 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3759615 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3759615 ']' 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.378 13:00:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.378 [2024-07-15 13:00:45.884274] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
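nvmf_tcp_init (common.sh@229 onward, traced above) carves the two ice ports into a point-to-point rig: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); port 4420 is opened in iptables, and both directions are ping-verified before nvmfappstart launches nvmf_tgt inside the namespace. Condensed from the trace, with error handling and the preliminary address flushes omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # nvmfappstart; pid 3759615 here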
00:08:24.378 [2024-07-15 13:00:45.884345] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.378 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.378 [2024-07-15 13:00:45.952846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.378 [2024-07-15 13:00:46.069314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.378 [2024-07-15 13:00:46.069364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.378 [2024-07-15 13:00:46.069388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.378 [2024-07-15 13:00:46.069401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.378 [2024-07-15 13:00:46.069413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.378 [2024-07-15 13:00:46.069494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.378 [2024-07-15 13:00:46.069608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.378 [2024-07-15 13:00:46.069611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.311 [2024-07-15 13:00:46.839555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.311 [2024-07-15 13:00:46.873012] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.311 NULL1 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3759768 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
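Once waitforlisten sees pid 3759615 answering on /var/tmp/spdk.sock, connect_stress.sh@15-@21 assembles the target over RPC and launches the stressor; the @27/@28 for/cat pairs that follow append twenty entries to rpc.txt for the monitor loop to replay (the payload of each cat is not shown in the trace). The bring-up, condensed:

    # rpc_cmd is the framework's wrapper around scripts/rpc.py on /var/tmp/spdk.sock
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192             # @15: TCP transport, 8192 B I/O unit
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -a -s SPDK00000000000001 -m 10                      # @16: allow any host, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420                          # @17: listener inside the netns
    rpc_cmd bdev_null_create NULL1 1000 512                     # @18: 1000 MB null bdev, 512 B blocks
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                                 # @20: ten seconds of connect/disconnect churn
    PERF_PID=$!                                                 # @21: 3759768 in this run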
00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.311 13:00:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.568 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768 00:08:25.568 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.568 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.568 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.132 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.132 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768 00:08:26.132 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.132 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.132 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.389 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.389 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768 
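From here to the end of the test the log is the monitor loop: @34 probes the stressor with kill -0 3759768, which delivers no signal and only checks that the pid is still alive, while @35 replays the batched RPCs against the target so the control plane keeps churning underneath the connect/disconnect stress. Roughly, assuming the loop simply alternates the two steps (the real script may instead bound it by an iteration count):

    while kill -0 "$PERF_PID" 2>/dev/null; do   # @34: liveness probe only, no signal delivered
        rpc_cmd < "$rpcs"                       # @35: replay rpc.txt
        sleep 1                                 # cadence is an assumption; the trace shows ~0.3-0.6 s
    done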
00:08:26.389 13:00:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:08:26.389 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:26.389 13:00:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:08:26.646 13:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:26.646 13:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768
00:08:26.646 13:00:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:08:26.646 13:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:26.646 13:00:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same [[ 0 == 0 ]] / kill -0 3759768 / rpc_cmd / xtrace_disable / set +x probe repeats verbatim at sub-second intervals from 00:08:26.904 through 00:08:34.884 while the stress process stays alive ...]
00:08:35.449 13:00:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:35.449 13:00:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768
00:08:35.449 13:00:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:08:35.449 13:00:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:35.449 13:00:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:08:35.449 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3759768
00:08:35.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3759768) - No such process
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3759768
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:35.707 rmmod nvme_tcp
00:08:35.707 rmmod nvme_fabrics
00:08:35.707 rmmod nvme_keyring
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3759615 ']'
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3759615
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3759615 ']'
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3759615
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3759615
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3759615'
00:08:35.707 killing process with pid 3759615
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3759615
00:08:35.707 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3759615
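The probe loop condensed above is the standard "poll until a PID disappears" idiom: kill -0 sends no signal and only tests whether the process still exists, and the script's own probe surfaces "No such process" once the PID is gone. A minimal self-contained sketch of the same logic (the variable name and interval here are illustrative, not the exact connect_stress.sh source):

    # Poll a backgrounded stress process until it exits, then reap it.
    stress_pid=3759768                 # PID of the background workload
    while kill -0 "$stress_pid" 2>/dev/null; do
        sleep 0.25                     # roughly the probe cadence seen in the log
    done
    wait "$stress_pid" 2>/dev/null     # collect its exit status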
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:35.965 13:00:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:38.499 13:00:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:38.499
00:08:38.499 real	0m15.989s
00:08:38.499 user	0m40.421s
00:08:38.499 sys	0m5.994s
00:08:38.499 13:00:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:38.499 13:00:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:08:38.499 ************************************
00:08:38.499 END TEST nvmf_connect_stress
00:08:38.499 ************************************
00:08:38.499 13:00:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:08:38.499 13:00:59 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:08:38.499 13:00:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:08:38.499 13:00:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:38.499 13:00:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:38.499 ************************************
00:08:38.499 START TEST nvmf_fused_ordering
00:08:38.499 ************************************
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:08:38.499 * Looking for test storage...
00:08:38.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same chain with the go dir prepended ...]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same chain with the protoc dir prepended ...]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the final PATH value from paths/export.sh@4 ...]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
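NVME_CONNECT and the NVME_HOST flag array prepared above are consumed by initiator-side tests that log in with nvme-cli. This particular test drives the target with an SPDK binary instead, so the commands below are purely illustrative of how those variables get used against the subsystem this job creates further down (the flags are standard nvme-cli):

    # Illustrative initiator-side login using the host identity generated above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # matching logout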
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:08:38.499 13:00:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:39.904 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
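gather_supported_nvmf_pci_devs walks sysfs and buckets NICs by PCI vendor:device pair; the e810 IDs 0x1592/0x159b are what match the two Intel ports found below. A stand-alone sketch of that kind of classification (the sysfs layout is standard Linux; pci_bus_cache and the real helper live in nvmf/common.sh):

    # Bucket PCI NICs by vendor:device, much like the e810/x722/mlx arrays above.
    intel=0x8086
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
            # NICs expose their netdev name(s) under <pci address>/net/
            echo "E810 port ${dev##*/} -> $(ls "$dev/net" 2>/dev/null)"
        fi
    done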
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:39.905 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:39.905 Found 0000:0a:00.1 (0x8086 - 0x159b)
[... the same ice / 0x1017 / 0x1019 / rdma checks repeat for the second port ...]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:39.905 Found net devices under 0000:0a:00.0: cvl_0_0
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
[... the same glob / up / prefix-strip steps repeat for the second port ...]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:39.905 Found net devices under 0000:0a:00.1: cvl_0_1
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:39.905 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
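Spelled out, nvmf_tcp_init's namespace dance above turns one dual-port NIC into a self-contained two-host testbed: port cvl_0_0 becomes the "target host" inside a private network namespace while cvl_0_1 stays in the root namespace as the initiator, with the two ports presumably looped back-to-back on this rig. The bare command sequence, as logged:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP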
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:40.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:40.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms
00:08:40.164
00:08:40.164 --- 10.0.0.2 ping statistics ---
00:08:40.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.164 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:40.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:40.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:08:40.164
00:08:40.164 --- 10.0.0.1 ping statistics ---
00:08:40.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.164 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3762922
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3762922
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3762922 ']'
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:40.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:40.164 13:01:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
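waitforlisten then blocks until the freshly launched target opens its RPC socket. Functionally it is close to the sketch below, a simplification: the real helper in autotest_common.sh also honors max_retries and a configurable rpc_addr.

    # Block until nvmf_tgt's JSON-RPC socket exists, bailing out if the app dies.
    nvmfpid=3762922
    rpc_addr=/var/tmp/spdk.sock
    until [ -S "$rpc_addr" ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done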
00:08:40.164 [2024-07-15 13:01:01.783204] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:08:40.164 [2024-07-15 13:01:01.783287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:40.164 EAL: No free 2048 kB hugepages reported on node 1
00:08:40.164 [2024-07-15 13:01:01.852592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.422 [2024-07-15 13:01:01.974580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:40.422 [2024-07-15 13:01:01.974637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:40.422 [2024-07-15 13:01:01.974655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:40.422 [2024-07-15 13:01:01.974668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:40.422 [2024-07-15 13:01:01.974680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:40.422 [2024-07-15 13:01:01.974710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.422 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 [2024-07-15 13:01:02.122691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 [2024-07-15 13:01:02.138900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
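rpc_cmd is the test suite's wrapper for issuing JSON-RPC calls to the socket above, so the calls logged here map onto SPDK's scripts/rpc.py. Done by hand, the same provisioning would look roughly like this (same arguments as the run; -a allows any host NQN, -s sets the serial number, -m caps the subsystem at 10 namespaces):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO units
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420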
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 NULL1
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:40.678 13:01:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:08:40.678 [2024-07-15 13:01:02.184659] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:08:40.678 [2024-07-15 13:01:02.184703] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763064 ]
00:08:41.241 EAL: No free 2048 kB hugepages reported on node 1
00:08:41.241 Attached to nqn.2016-06.io.spdk:cnode1
00:08:41.241 Namespace ID: 1 size: 1GB
00:08:41.241 fused_ordering(0)
00:08:41.241 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(796) follow in strict ascending order, one per line, in bursts stamped 00:08:41.241-242, 00:08:41.509-511, 00:08:42.088 and 00:08:42.651-652 ...]
00:08:42.652 fused_ordering(797)
fused_ordering(798) 00:08:42.652 fused_ordering(799) 00:08:42.652 fused_ordering(800) 00:08:42.652 fused_ordering(801) 00:08:42.652 fused_ordering(802) 00:08:42.652 fused_ordering(803) 00:08:42.652 fused_ordering(804) 00:08:42.652 fused_ordering(805) 00:08:42.652 fused_ordering(806) 00:08:42.652 fused_ordering(807) 00:08:42.652 fused_ordering(808) 00:08:42.652 fused_ordering(809) 00:08:42.652 fused_ordering(810) 00:08:42.652 fused_ordering(811) 00:08:42.652 fused_ordering(812) 00:08:42.652 fused_ordering(813) 00:08:42.652 fused_ordering(814) 00:08:42.652 fused_ordering(815) 00:08:42.652 fused_ordering(816) 00:08:42.652 fused_ordering(817) 00:08:42.652 fused_ordering(818) 00:08:42.652 fused_ordering(819) 00:08:42.652 fused_ordering(820) 00:08:43.584 fused_ordering(821) 00:08:43.584 fused_ordering(822) 00:08:43.584 fused_ordering(823) 00:08:43.584 fused_ordering(824) 00:08:43.584 fused_ordering(825) 00:08:43.584 fused_ordering(826) 00:08:43.584 fused_ordering(827) 00:08:43.584 fused_ordering(828) 00:08:43.584 fused_ordering(829) 00:08:43.584 fused_ordering(830) 00:08:43.584 fused_ordering(831) 00:08:43.584 fused_ordering(832) 00:08:43.584 fused_ordering(833) 00:08:43.584 fused_ordering(834) 00:08:43.584 fused_ordering(835) 00:08:43.584 fused_ordering(836) 00:08:43.584 fused_ordering(837) 00:08:43.584 fused_ordering(838) 00:08:43.584 fused_ordering(839) 00:08:43.584 fused_ordering(840) 00:08:43.584 fused_ordering(841) 00:08:43.584 fused_ordering(842) 00:08:43.584 fused_ordering(843) 00:08:43.584 fused_ordering(844) 00:08:43.584 fused_ordering(845) 00:08:43.584 fused_ordering(846) 00:08:43.584 fused_ordering(847) 00:08:43.584 fused_ordering(848) 00:08:43.584 fused_ordering(849) 00:08:43.584 fused_ordering(850) 00:08:43.584 fused_ordering(851) 00:08:43.584 fused_ordering(852) 00:08:43.584 fused_ordering(853) 00:08:43.584 fused_ordering(854) 00:08:43.584 fused_ordering(855) 00:08:43.584 fused_ordering(856) 00:08:43.584 fused_ordering(857) 00:08:43.584 fused_ordering(858) 00:08:43.584 fused_ordering(859) 00:08:43.584 fused_ordering(860) 00:08:43.584 fused_ordering(861) 00:08:43.584 fused_ordering(862) 00:08:43.584 fused_ordering(863) 00:08:43.584 fused_ordering(864) 00:08:43.584 fused_ordering(865) 00:08:43.584 fused_ordering(866) 00:08:43.584 fused_ordering(867) 00:08:43.584 fused_ordering(868) 00:08:43.584 fused_ordering(869) 00:08:43.584 fused_ordering(870) 00:08:43.584 fused_ordering(871) 00:08:43.584 fused_ordering(872) 00:08:43.584 fused_ordering(873) 00:08:43.584 fused_ordering(874) 00:08:43.584 fused_ordering(875) 00:08:43.584 fused_ordering(876) 00:08:43.584 fused_ordering(877) 00:08:43.584 fused_ordering(878) 00:08:43.584 fused_ordering(879) 00:08:43.584 fused_ordering(880) 00:08:43.584 fused_ordering(881) 00:08:43.584 fused_ordering(882) 00:08:43.584 fused_ordering(883) 00:08:43.584 fused_ordering(884) 00:08:43.584 fused_ordering(885) 00:08:43.584 fused_ordering(886) 00:08:43.584 fused_ordering(887) 00:08:43.584 fused_ordering(888) 00:08:43.584 fused_ordering(889) 00:08:43.584 fused_ordering(890) 00:08:43.584 fused_ordering(891) 00:08:43.584 fused_ordering(892) 00:08:43.584 fused_ordering(893) 00:08:43.584 fused_ordering(894) 00:08:43.584 fused_ordering(895) 00:08:43.584 fused_ordering(896) 00:08:43.584 fused_ordering(897) 00:08:43.584 fused_ordering(898) 00:08:43.584 fused_ordering(899) 00:08:43.584 fused_ordering(900) 00:08:43.584 fused_ordering(901) 00:08:43.584 fused_ordering(902) 00:08:43.584 fused_ordering(903) 00:08:43.584 fused_ordering(904) 00:08:43.584 fused_ordering(905) 
00:08:43.584 fused_ordering(906) 00:08:43.584 fused_ordering(907) 00:08:43.584 fused_ordering(908) 00:08:43.584 fused_ordering(909) 00:08:43.584 fused_ordering(910) 00:08:43.584 fused_ordering(911) 00:08:43.584 fused_ordering(912) 00:08:43.584 fused_ordering(913) 00:08:43.584 fused_ordering(914) 00:08:43.584 fused_ordering(915) 00:08:43.584 fused_ordering(916) 00:08:43.584 fused_ordering(917) 00:08:43.584 fused_ordering(918) 00:08:43.584 fused_ordering(919) 00:08:43.584 fused_ordering(920) 00:08:43.584 fused_ordering(921) 00:08:43.584 fused_ordering(922) 00:08:43.584 fused_ordering(923) 00:08:43.584 fused_ordering(924) 00:08:43.584 fused_ordering(925) 00:08:43.584 fused_ordering(926) 00:08:43.584 fused_ordering(927) 00:08:43.584 fused_ordering(928) 00:08:43.584 fused_ordering(929) 00:08:43.584 fused_ordering(930) 00:08:43.584 fused_ordering(931) 00:08:43.584 fused_ordering(932) 00:08:43.584 fused_ordering(933) 00:08:43.584 fused_ordering(934) 00:08:43.584 fused_ordering(935) 00:08:43.584 fused_ordering(936) 00:08:43.584 fused_ordering(937) 00:08:43.584 fused_ordering(938) 00:08:43.584 fused_ordering(939) 00:08:43.584 fused_ordering(940) 00:08:43.584 fused_ordering(941) 00:08:43.584 fused_ordering(942) 00:08:43.584 fused_ordering(943) 00:08:43.584 fused_ordering(944) 00:08:43.584 fused_ordering(945) 00:08:43.584 fused_ordering(946) 00:08:43.584 fused_ordering(947) 00:08:43.584 fused_ordering(948) 00:08:43.584 fused_ordering(949) 00:08:43.584 fused_ordering(950) 00:08:43.584 fused_ordering(951) 00:08:43.584 fused_ordering(952) 00:08:43.584 fused_ordering(953) 00:08:43.584 fused_ordering(954) 00:08:43.584 fused_ordering(955) 00:08:43.584 fused_ordering(956) 00:08:43.584 fused_ordering(957) 00:08:43.584 fused_ordering(958) 00:08:43.584 fused_ordering(959) 00:08:43.584 fused_ordering(960) 00:08:43.584 fused_ordering(961) 00:08:43.584 fused_ordering(962) 00:08:43.584 fused_ordering(963) 00:08:43.584 fused_ordering(964) 00:08:43.584 fused_ordering(965) 00:08:43.584 fused_ordering(966) 00:08:43.584 fused_ordering(967) 00:08:43.584 fused_ordering(968) 00:08:43.584 fused_ordering(969) 00:08:43.584 fused_ordering(970) 00:08:43.584 fused_ordering(971) 00:08:43.584 fused_ordering(972) 00:08:43.584 fused_ordering(973) 00:08:43.584 fused_ordering(974) 00:08:43.584 fused_ordering(975) 00:08:43.584 fused_ordering(976) 00:08:43.584 fused_ordering(977) 00:08:43.584 fused_ordering(978) 00:08:43.584 fused_ordering(979) 00:08:43.584 fused_ordering(980) 00:08:43.584 fused_ordering(981) 00:08:43.584 fused_ordering(982) 00:08:43.584 fused_ordering(983) 00:08:43.584 fused_ordering(984) 00:08:43.584 fused_ordering(985) 00:08:43.584 fused_ordering(986) 00:08:43.584 fused_ordering(987) 00:08:43.584 fused_ordering(988) 00:08:43.584 fused_ordering(989) 00:08:43.584 fused_ordering(990) 00:08:43.584 fused_ordering(991) 00:08:43.585 fused_ordering(992) 00:08:43.585 fused_ordering(993) 00:08:43.585 fused_ordering(994) 00:08:43.585 fused_ordering(995) 00:08:43.585 fused_ordering(996) 00:08:43.585 fused_ordering(997) 00:08:43.585 fused_ordering(998) 00:08:43.585 fused_ordering(999) 00:08:43.585 fused_ordering(1000) 00:08:43.585 fused_ordering(1001) 00:08:43.585 fused_ordering(1002) 00:08:43.585 fused_ordering(1003) 00:08:43.585 fused_ordering(1004) 00:08:43.585 fused_ordering(1005) 00:08:43.585 fused_ordering(1006) 00:08:43.585 fused_ordering(1007) 00:08:43.585 fused_ordering(1008) 00:08:43.585 fused_ordering(1009) 00:08:43.585 fused_ordering(1010) 00:08:43.585 fused_ordering(1011) 00:08:43.585 fused_ordering(1012) 
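The run collapsed above is the per-request trace from the fused_ordering target test: the harness appears to print one fused_ordering(N) line per request, and the elapsed stamps advance in bursts (00:08:41.510, 00:08:42.088, 00:08:42.651, 00:08:43.584), i.e. the 1024 requests complete in batches of roughly 200. When reading a saved copy of such a log offline, runs like this condense well into ranges; a minimal sketch, assuming the capture was saved as fused_ordering.log (hypothetical file name):

  grep -oE 'fused_ordering\([0-9]+\)' fused_ordering.log |
    awk -F'[()]' 'NR==1      {first=$2; prev=$2; next}
                  $2==prev+1 {prev=$2; next}
                  {print first"-"prev; first=$2; prev=$2}
                  END {print first"-"prev}'   # prints e.g. 368-1023

The teardown records that follow are the trap handler installed by fused_ordering.sh firing on normal exit.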
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:43.585 13:01:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:43.585 rmmod nvme_tcp
00:08:43.585 rmmod nvme_fabrics
00:08:43.585 rmmod nvme_keyring
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3762922 ']'
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3762922
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3762922 ']'
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3762922
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3762922
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3762922'
00:08:43.585 killing process with pid 3762922
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3762922
00:08:43.585 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3762922
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:01:05 nvmf_tcp.nvmf_fused_ordering --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.843 13:01:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.747 13:01:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.747 00:08:45.747 real 0m7.754s 00:08:45.747 user 0m5.484s 00:08:45.747 sys 0m3.384s 00:08:45.747 13:01:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.747 13:01:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:45.747 ************************************ 00:08:45.747 END TEST nvmf_fused_ordering 00:08:45.747 ************************************ 00:08:45.747 13:01:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:45.747 13:01:07 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:45.747 13:01:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.747 13:01:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.747 13:01:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:46.007 ************************************ 00:08:46.007 START TEST nvmf_delete_subsystem 00:08:46.007 ************************************ 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:46.007 * Looking for test storage... 00:08:46.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.007 13:01:07 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2-6 -- # [five records collapsed: three PATH assignments prepending /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to an already heavily duplicated PATH, then export PATH, then an echo of the final value ending ...:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin]
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:46.007 13:01:07
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.007 13:01:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.912 13:01:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.912 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.913 13:01:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:47.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.913 13:01:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:47.913 00:08:47.913 --- 10.0.0.2 ping statistics --- 00:08:47.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.913 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:08:47.913 00:08:47.913 --- 10.0.0.1 ping statistics --- 00:08:47.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.913 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3765267 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3765267 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3765267 ']' 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.913 13:01:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.172 [2024-07-15 13:01:09.629489] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:08:48.172 [2024-07-15 13:01:09.629567] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.172 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.172 [2024-07-15 13:01:09.697670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.172 [2024-07-15 13:01:09.814105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
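At this point nvmftestinit is done: the two ice ports were bound as cvl_0_0 and cvl_0_1, cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace, connectivity was verified in both directions, and nvmf_tgt (pid 3765267) is coming up inside that namespace on cores 0-1 (-m 0x3). Condensed from the records above, the namespace plumbing amounts to the following; this is a sketch of what nvmf/common.sh does here, not its full logic (address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
  # launch the target inside the namespace, as NVMF_TARGET_NS_CMD does above:
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

Splitting the two ports across namespaces lets one machine exercise a real NIC-to-NIC NVMe/TCP path, acting as initiator on cvl_0_1 (10.0.0.1) and target on cvl_0_0 (10.0.0.2).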
00:08:48.172 [2024-07-15 13:01:09.814168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.172 [2024-07-15 13:01:09.814198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.172 [2024-07-15 13:01:09.814212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.172 [2024-07-15 13:01:09.814224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.172 [2024-07-15 13:01:09.814310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.172 [2024-07-15 13:01:09.814317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 [2024-07-15 13:01:10.645775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 [2024-07-15 13:01:10.661989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 NULL1 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 Delay0 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3765421 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:49.106 13:01:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:49.106 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.106 [2024-07-15 13:01:10.746815] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
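The records above are the crux of the test setup: the subsystem's only namespace is a null bdev wrapped in a delay bdev whose four latency parameters (average and tail, read and write) are all 1,000,000 us, so the 128 I/Os that spdk_nvme_perf queues are still in flight two seconds later when the subsystem is deleted underneath them. A sketch of the same sequence driven directly with scripts/rpc.py (rpc_cmd in the log is a thin wrapper around it; SPDK_DIR stands in for the workspace path, flags and values exactly as shown above):

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192       # transport options as in the log
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512               # 1000 MB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 s of 70/30 randrw at queue depth 128 from the initiator side, then pull
  # the subsystem while those delayed I/Os are still outstanding:
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The deprecation warning above is incidental: it is perf connecting to the discovery subsystem on the same listener, not part of the behavior under test.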
00:08:51.004 13:01:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:51.004 13:01:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:51.004 13:01:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:51.262 [aborted I/O completions from the in-flight perf workload: several hundred "Read/Write completed with error (sct=0, sc=8)" records, interleaved with repeated "starting I/O failed: -6", collapsed here and below]
00:08:51.263 [2024-07-15 13:01:12.877555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf980 is same with the state(5) to be set
00:08:51.263 [2024-07-15 13:01:12.878352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb88400d430 is same with the state(5) to be set
00:08:52.196 [2024-07-15 13:01:13.843447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0ac0 is same with the state(5) to be set
00:08:52.196 [2024-07-15 13:01:13.876787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb88400d740 is same with the state(5) to be set
00:08:52.196 [2024-07-15 13:01:13.877024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb88400cfe0 is same with the state(5) to be set
00:08:52.196 [2024-07-15 13:01:13.879259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf3e0 is same with the state(5) to be set
00:08:52.196 [remaining aborted completion records collapsed; the captured section breaks off mid-record here]
sc=8) 00:08:52.196 Read completed with error (sct=0, sc=8) 00:08:52.196 Read completed with error (sct=0, sc=8) 00:08:52.196 Read completed with error (sct=0, sc=8) 00:08:52.196 Read completed with error (sct=0, sc=8) 00:08:52.196 Read completed with error (sct=0, sc=8) 00:08:52.196 [2024-07-15 13:01:13.879958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf7a0 is same with the state(5) to be set 00:08:52.196 13:01:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.196 Initializing NVMe Controllers 00:08:52.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.196 Controller IO queue size 128, less than required. 00:08:52.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:52.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:52.196 Initialization complete. Launching workers. 00:08:52.196 ======================================================== 00:08:52.196 Latency(us) 00:08:52.196 Device Information : IOPS MiB/s Average min max 00:08:52.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.73 0.08 890483.51 845.29 1012123.67 00:08:52.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.29 0.08 911952.64 337.51 1013470.60 00:08:52.196 ======================================================== 00:08:52.196 Total : 337.02 0.16 900949.32 337.51 1013470.60 00:08:52.196 00:08:52.196 13:01:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:52.196 [2024-07-15 13:01:13.880465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0ac0 (9): Bad file descriptor 00:08:52.196 13:01:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3765421 00:08:52.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:52.196 13:01:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3765421 00:08:52.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3765421) - No such process 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3765421 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3765421 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3765421 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@651 -- # es=1 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.762 [2024-07-15 13:01:14.401637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3765830 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:52.762 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.762 [2024-07-15 13:01:14.459325] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
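The trace above launches a second spdk_nvme_perf run against the target and then, in the entries that follow, polls it from delete_subsystem.sh with kill -0 and sleep 0.5. Reconstructed from the traced commands, the launch-and-poll pattern looks roughly like this (a sketch: the background "&" and the loop structure are inferred from the xtrace, not shown verbatim in it):

    # Launch perf in the background against the TCP listener (flags as traced:
    # cores 2-3, 3 s run, queue depth 128, 70/30 random read/write, 512 B I/Os).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                            # 3765830 in this run

    # Poll until perf exits; kill -0 only probes whether the PID is still alive.
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 20 )) && break        # give up after ~10 s of 0.5 s sleeps
        sleep 0.5
    done

When the PID disappears, kill -0 prints "No such process" and the script falls through to wait on the (already reaped) job, which is why the "kill: (3765830) - No such process" line appears below.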
00:08:53.328 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.328 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:53.328 13:01:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.893 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.893 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:53.893 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.458 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.458 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:54.458 13:01:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.023 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.023 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:55.023 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.281 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.281 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:55.281 13:01:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.848 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.848 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:55.848 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.106 Initializing NVMe Controllers 00:08:56.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.106 Controller IO queue size 128, less than required. 00:08:56.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.106 Initialization complete. Launching workers. 
00:08:56.106 ======================================================== 00:08:56.106 Latency(us) 00:08:56.106 Device Information : IOPS MiB/s Average min max 00:08:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003750.74 1000256.92 1041902.04 00:08:56.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005349.35 1000271.20 1012678.49 00:08:56.106 ======================================================== 00:08:56.106 Total : 256.00 0.12 1004550.04 1000256.92 1041902.04 00:08:56.106 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3765830 00:08:56.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3765830) - No such process 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3765830 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.364 rmmod nvme_tcp 00:08:56.364 rmmod nvme_fabrics 00:08:56.364 rmmod nvme_keyring 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3765267 ']' 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3765267 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3765267 ']' 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3765267 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.364 13:01:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3765267 00:08:56.364 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:56.364 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:56.364 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3765267' 00:08:56.364 killing process with pid 3765267 00:08:56.364 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3765267 00:08:56.364 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3765267 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.623 13:01:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.158 13:01:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.158 00:08:59.158 real 0m12.869s 00:08:59.158 user 0m29.257s 00:08:59.158 sys 0m2.882s 00:08:59.158 13:01:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.158 13:01:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.158 ************************************ 00:08:59.158 END TEST nvmf_delete_subsystem 00:08:59.158 ************************************ 00:08:59.158 13:01:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:59.158 13:01:20 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:59.158 13:01:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.158 13:01:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.158 13:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.158 ************************************ 00:08:59.158 START TEST nvmf_ns_masking 00:08:59.158 ************************************ 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:59.158 * Looking for test storage... 
00:08:59.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1f8fd472-5fe5-4565-8430-0dd8f467bcb2 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=334b560f-53f7-431e-a73e-84556930e07c 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:59.158 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dc4051e3-ba10-4a6d-82b2-68a8d6a6eaf1 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.159 13:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.061 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:01.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:01.062 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.062 
13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:01.062 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:01.062 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:09:01.062 00:09:01.062 --- 10.0.0.2 ping statistics --- 00:09:01.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.062 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:09:01.062 00:09:01.062 --- 10.0.0.1 ping statistics --- 00:09:01.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.062 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3768292 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3768292 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3768292 ']' 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.062 13:01:22 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.062 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:01.062 [2024-07-15 13:01:22.654302] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:09:01.062 [2024-07-15 13:01:22.654377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.062 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.062 [2024-07-15 13:01:22.719316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.321 [2024-07-15 13:01:22.828425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.321 [2024-07-15 13:01:22.828481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.321 [2024-07-15 13:01:22.828497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.321 [2024-07-15 13:01:22.828510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.321 [2024-07-15 13:01:22.828522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.321 [2024-07-15 13:01:22.828551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.321 13:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:01.578 [2024-07-15 13:01:23.197348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.578 13:01:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:01.578 13:01:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:01.578 13:01:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:01.835 Malloc1 00:09:02.093 13:01:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:02.351 Malloc2 00:09:02.351 13:01:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
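At this point everything the masking test needs on the target side has been traced: nvmf_tgt running inside the cvl_0_0_ns_spdk namespace, a TCP transport, two 64 MiB malloc bdevs, and the subsystem itself. Condensed into a standalone sketch (rpc.py path shortened relative to the workspace paths above; run from the spdk checkout):

    # Target bring-up, as traced above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u 8192 sets io_unit_size
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME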
00:09:02.608 13:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:02.865 13:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.122 [2024-07-15 13:01:24.650406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.122 13:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:03.122 13:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc4051e3-ba10-4a6d-82b2-68a8d6a6eaf1 -a 10.0.0.2 -s 4420 -i 4 00:09:03.378 13:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.378 13:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.378 13:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.378 13:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.378 13:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:05.290 [ 0]:0x1 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:05.290 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:05.547 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44e75d4641ab428ea9afe0e89c78f143 00:09:05.547 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44e75d4641ab428ea9afe0e89c78f143 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.547 13:01:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
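On the host side, the connect helper traced above attaches with an explicit host NQN and host ID, and ns_is_visible then decides visibility by reading the namespace's NGUID: a namespace masked from this host identifies with an all-zero NGUID. A condensed sketch of those host-side probes, using the identifiers from this run:

    # Attach to the subsystem as host1 (4 I/O queue pairs, as traced).
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I dc4051e3-ba10-4a6d-82b2-68a8d6a6eaf1 -a 10.0.0.2 -s 4420 -i 4

    # Visibility probe: is the NSID listed, and does it carry a real NGUID?
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => masked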
00:09:05.547 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:05.547 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:05.547 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:05.547 [ 0]:0x1 00:09:05.547 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:05.547 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44e75d4641ab428ea9afe0e89c78f143 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44e75d4641ab428ea9afe0e89c78f143 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:05.803 [ 1]:0x2 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:05.803 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:05.804 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.804 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:05.804 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.804 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.061 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:06.318 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:06.318 13:01:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc4051e3-ba10-4a6d-82b2-68a8d6a6eaf1 -a 10.0.0.2 -s 4420 -i 4 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:06.575 13:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.470 13:01:30 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.470 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:08.471 [ 0]:0x2 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.471 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.727 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:08.728 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.728 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:08.986 [ 0]:0x1 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44e75d4641ab428ea9afe0e89c78f143 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44e75d4641ab428ea9afe0e89c78f143 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:08.986 [ 1]:0x2 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.986 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:09.245 [ 0]:0x2 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:09.245 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.503 13:01:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:09.761 13:01:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:09.761 13:01:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc4051e3-ba10-4a6d-82b2-68a8d6a6eaf1 -a 10.0.0.2 -s 4420 -i 4 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:10.021 13:01:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
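Every visibility assertion in this trace runs the same three-step check traced above as target/ns_masking.sh@43-45. A minimal reconstruction of that helper from the trace lines (simplified to the commands actually shown; the surrounding xtrace plumbing is omitted):

    # Succeeds when namespace $1 (e.g. 0x1) is visible through /dev/nvme0:
    # it must appear in the namespace list, and its NGUID must not be the
    # all-zero value that an inactive (masked) namespace reports.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around it in the trace inverts the result, so NOT ns_is_visible 0x1 passes exactly when this host has been removed from namespace 1's allowed hosts and the controller reports a zero NGUID for it.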
00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.921 [ 0]:0x1 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44e75d4641ab428ea9afe0e89c78f143 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44e75d4641ab428ea9afe0e89c78f143 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:11.921 [ 1]:0x2 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.921 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.179 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:12.179 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.179 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.437 13:01:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:12.437 [ 0]:0x2 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:12.437 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:12.694 [2024-07-15 13:01:34.271217] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:12.694 request: 00:09:12.694 { 00:09:12.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.694 "nsid": 2, 00:09:12.694 "host": "nqn.2016-06.io.spdk:host1", 00:09:12.694 "method": "nvmf_ns_remove_host", 00:09:12.694 "req_id": 1 00:09:12.694 } 00:09:12.694 Got JSON-RPC error response 00:09:12.694 response: 00:09:12.694 { 00:09:12.694 "code": -32602, 00:09:12.694 "message": "Invalid parameters" 00:09:12.694 } 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:12.694 [ 0]:0x2 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e6563e72e7463fb26f3ee1a14eb800 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
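Two things are worth noting in the trace above. First, the -32602 "Invalid parameters" response is the expected negative result: evidently only namespace 1 was re-added with --no-auto-visible earlier in the trace, so the per-host masking RPC is rejected for the still auto-visible namespace 2 (nvmf_rpc_ns_visible_paused logs the failure). Second, the test now starts a second SPDK application with its own RPC socket to act as the initiator, so that bdev_nvme rather than the kernel driver makes the connection. A minimal sketch condensed from the sh@117-121 and sh@48 trace lines (rpc.py paths shortened; hostrpc is the test's own wrapper):

    # Separate SPDK instance on a private RPC socket, pinned to core mask 0x2.
    spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # Host-side RPCs go to that socket; target-side RPCs use the default one.
    hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }
    # Attaching as host1 exposes only the namespaces visible to host1,
    # which then surface as bdevs (nvme0n1, and nvme1n2 for the host2 path).
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0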
a8e6563e72e7463fb26f3ee1a14eb800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:12.694 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3769800 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3769800 /var/tmp/host.sock 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3769800 ']' 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.951 13:01:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:12.951 [2024-07-15 13:01:34.472518] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:09:12.951 [2024-07-15 13:01:34.472602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769800 ] 00:09:12.951 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.951 [2024-07-15 13:01:34.536101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.208 [2024-07-15 13:01:34.656534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.787 13:01:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.787 13:01:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:13.787 13:01:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.045 13:01:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.303 13:01:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1f8fd472-5fe5-4565-8430-0dd8f467bcb2 00:09:14.303 13:01:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:14.303 13:01:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1F8FD4725FE5456584300DD8F467BCB2 -i 00:09:14.560 13:01:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 334b560f-53f7-431e-a73e-84556930e07c 00:09:14.560 13:01:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:14.560 13:01:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 334B560F53F7431EA73E84556930E07C -i 00:09:15.123 13:01:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:15.123 13:01:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:15.380 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:15.380 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:15.945 nvme0n1 00:09:15.945 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:15.945 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:16.508 nvme1n2 00:09:16.508 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:16.508 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:16.508 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:16.508 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:16.508 13:01:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:16.787 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:16.787 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:16.787 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:16.787 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1f8fd472-5fe5-4565-8430-0dd8f467bcb2 == \1\f\8\f\d\4\7\2\-\5\f\e\5\-\4\5\6\5\-\8\4\3\0\-\0\d\d\8\f\4\6\7\b\c\b\2 ]] 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 334b560f-53f7-431e-a73e-84556930e07c == \3\3\4\b\5\6\0\f\-\5\3\f\7\-\4\3\1\e\-\a\7\3\e\-\8\4\5\5\6\9\3\0\e\0\7\c ]] 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3769800 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3769800 ']' 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3769800 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.044 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3769800 00:09:17.301 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:17.301 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:17.301 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3769800' 00:09:17.301 killing process with pid 3769800 00:09:17.301 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3769800 00:09:17.301 13:01:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3769800 00:09:17.558 13:01:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:18.123 13:01:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.123 rmmod nvme_tcp 00:09:18.123 rmmod nvme_fabrics 00:09:18.123 rmmod nvme_keyring 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3768292 ']' 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3768292 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3768292 ']' 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3768292 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3768292 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3768292' 00:09:18.123 killing process with pid 3768292 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3768292 00:09:18.123 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3768292 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.380 13:01:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.284 13:01:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.542 00:09:20.542 real 0m21.602s 00:09:20.542 user 0m29.024s 00:09:20.542 sys 0m4.071s 00:09:20.542 13:01:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.542 13:01:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 ************************************ 00:09:20.542 END TEST nvmf_ns_masking 00:09:20.542 ************************************ 00:09:20.542 13:01:42 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:20.542 13:01:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:20.542 13:01:42 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:20.542 13:01:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.542 13:01:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.542 13:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 ************************************ 00:09:20.542 START TEST nvmf_nvme_cli 00:09:20.542 ************************************ 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:20.542 * Looking for test storage... 00:09:20.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.542 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.543 13:01:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.441 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:22.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:22.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:22.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:22.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.701 13:01:44 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:22.701 00:09:22.701 --- 10.0.0.2 ping statistics --- 00:09:22.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.701 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:09:22.701 00:09:22.701 --- 10.0.0.1 ping statistics --- 00:09:22.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.701 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3772419 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3772419 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3772419 ']' 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.701 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:22.701 [2024-07-15 13:01:44.375921] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
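Before any of the nvme_cli assertions can run, nvmftestinit (traced above as nvmf/common.sh@229-268) splits the rig's two Intel E810 ports (0x159b) between a private network namespace on the target side and the root namespace on the initiator side, then sanity-checks the link with the pings shown. The essential steps, condensed into a sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this test bed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process whose startup banner begins on the line above.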
00:09:22.702 [2024-07-15 13:01:44.376025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.959 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.960 [2024-07-15 13:01:44.443685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.960 [2024-07-15 13:01:44.554639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.960 [2024-07-15 13:01:44.554707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.960 [2024-07-15 13:01:44.554720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.960 [2024-07-15 13:01:44.554731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.960 [2024-07-15 13:01:44.554754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.960 [2024-07-15 13:01:44.554811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.960 [2024-07-15 13:01:44.554936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.960 [2024-07-15 13:01:44.554962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.960 [2024-07-15 13:01:44.554964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 [2024-07-15 13:01:44.707765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 Malloc0 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 Malloc1 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 [2024-07-15 13:01:44.793300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.218 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:23.475 00:09:23.475 Discovery Log Number of Records 2, Generation counter 2 00:09:23.475 =====Discovery Log Entry 0====== 00:09:23.475 trtype: tcp 00:09:23.475 adrfam: ipv4 00:09:23.475 subtype: current discovery subsystem 00:09:23.475 treq: not required 00:09:23.475 portid: 0 00:09:23.475 trsvcid: 4420 00:09:23.475 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:23.475 traddr: 10.0.0.2 00:09:23.475 eflags: explicit discovery connections, duplicate discovery information 00:09:23.475 sectype: none 00:09:23.475 =====Discovery Log Entry 1====== 00:09:23.475 trtype: tcp 00:09:23.475 adrfam: ipv4 00:09:23.475 subtype: nvme subsystem 00:09:23.475 treq: not required 00:09:23.475 portid: 0 00:09:23.475 trsvcid: 4420 00:09:23.475 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:23.475 traddr: 10.0.0.2 00:09:23.475 eflags: none 00:09:23.475 sectype: none 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.475 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:23.476 13:01:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:24.045 13:01:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:26.570 13:01:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:26.570 /dev/nvme0n1 ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:26.570 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.571 rmmod nvme_tcp 00:09:26.571 rmmod nvme_fabrics 00:09:26.571 rmmod nvme_keyring 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3772419 ']' 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3772419 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3772419 ']' 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3772419 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3772419 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3772419' 00:09:26.571 killing process with pid 3772419 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3772419 00:09:26.571 13:01:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3772419 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.571 13:01:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.104 13:01:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:29.104 00:09:29.104 real 0m8.204s 00:09:29.104 user 0m14.819s 00:09:29.104 sys 0m2.247s 00:09:29.104 13:01:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.104 13:01:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:29.104 ************************************ 00:09:29.104 END TEST nvmf_nvme_cli 00:09:29.104 ************************************ 00:09:29.104 13:01:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.104 13:01:50 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:29.104 13:01:50 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:29.104 13:01:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.104 13:01:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.104 13:01:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.104 ************************************ 00:09:29.104 START TEST nvmf_vfio_user 00:09:29.104 ************************************ 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:29.104 * Looking for test storage... 00:09:29.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:29.104 
13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3773228 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3773228' 00:09:29.104 Process pid: 3773228 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3773228 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3773228 ']' 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.104 13:01:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:29.104 [2024-07-15 13:01:50.408532] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:09:29.104 [2024-07-15 13:01:50.408625] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.104 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.104 [2024-07-15 13:01:50.471625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.104 [2024-07-15 13:01:50.593339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.104 [2024-07-15 13:01:50.593396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.105 [2024-07-15 13:01:50.593413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.105 [2024-07-15 13:01:50.593425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.105 [2024-07-15 13:01:50.593437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
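The suite launches the target with nvmf_tgt pinned to cores 0-3 and only proceeds once the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the workspace layout shown in the paths above (the real waitforlisten helper performs additional liveness checks):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the app is ready
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done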
00:09:29.105 [2024-07-15 13:01:50.593515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.105 [2024-07-15 13:01:50.593578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.105 [2024-07-15 13:01:50.593582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.105 [2024-07-15 13:01:50.593551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.037 13:01:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.037 13:01:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:30.037 13:01:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:30.970 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:31.228 Malloc1 00:09:31.228 13:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:31.793 13:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:31.793 13:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:32.358 13:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:32.359 13:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:32.359 13:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:32.359 Malloc2 00:09:32.359 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:32.617 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:32.875 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:33.132 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:33.132 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:33.132 13:01:54 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:33.132 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:33.132 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:33.132 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:33.132 [2024-07-15 13:01:54.817534] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:09:33.132 [2024-07-15 13:01:54.817580] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773775 ] 00:09:33.132 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.392 [2024-07-15 13:01:54.851259] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:33.392 [2024-07-15 13:01:54.860318] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.392 [2024-07-15 13:01:54.860350] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4e2f9fd000 00:09:33.392 [2024-07-15 13:01:54.861314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.862309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.863312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.864320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.865325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.866329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.867333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.868341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.392 [2024-07-15 13:01:54.869344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.392 [2024-07-15 13:01:54.869366] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4e2f9f2000 00:09:33.392 [2024-07-15 13:01:54.870499] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.392 [2024-07-15 13:01:54.890256] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:33.392 [2024-07-15 13:01:54.890289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:33.392 [2024-07-15 13:01:54.892486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.392 [2024-07-15 13:01:54.892536] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:33.392 [2024-07-15 13:01:54.892627] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:33.392 [2024-07-15 13:01:54.892655] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:33.392 [2024-07-15 13:01:54.892665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:33.392 [2024-07-15 13:01:54.893469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:33.392 [2024-07-15 13:01:54.893489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:33.392 [2024-07-15 13:01:54.893502] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:33.392 [2024-07-15 13:01:54.894472] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.392 [2024-07-15 13:01:54.894491] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:33.392 [2024-07-15 13:01:54.894505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.895478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:33.392 [2024-07-15 13:01:54.895496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.896482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:33.392 [2024-07-15 13:01:54.896500] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:33.392 [2024-07-15 13:01:54.896509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.896521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.896630] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:33.392 [2024-07-15 13:01:54.896639] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.896647] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:33.392 [2024-07-15 13:01:54.897495] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:33.392 [2024-07-15 13:01:54.898499] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:33.392 [2024-07-15 13:01:54.899506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.392 [2024-07-15 13:01:54.900504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:33.392 [2024-07-15 13:01:54.900595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:33.392 [2024-07-15 13:01:54.901518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:33.392 [2024-07-15 13:01:54.901536] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:33.392 [2024-07-15 13:01:54.901545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:33.392 [2024-07-15 13:01:54.901569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:33.392 [2024-07-15 13:01:54.901583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:33.392 [2024-07-15 13:01:54.901608] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.392 [2024-07-15 13:01:54.901617] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.392 [2024-07-15 13:01:54.901636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.392 [2024-07-15 13:01:54.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:33.392 [2024-07-15 13:01:54.901716] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:33.392 [2024-07-15 13:01:54.901728] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:33.392 [2024-07-15 13:01:54.901736] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:33.392 [2024-07-15 13:01:54.901744] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:33.392 [2024-07-15 13:01:54.901751] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:33.392 [2024-07-15 13:01:54.901759] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:33.392 [2024-07-15 13:01:54.901766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:33.392 [2024-07-15 13:01:54.901779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:33.392 [2024-07-15 13:01:54.901793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:33.392 [2024-07-15 13:01:54.901814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:33.392 [2024-07-15 13:01:54.901835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.393 [2024-07-15 13:01:54.901848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.393 [2024-07-15 13:01:54.901889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.393 [2024-07-15 13:01:54.901904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.393 [2024-07-15 13:01:54.901913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.901929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.901945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.901960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.901971] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:33.393 [2024-07-15 13:01:54.901979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.901990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902087] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902116] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:33.393 [2024-07-15 13:01:54.902125] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:33.393 [2024-07-15 13:01:54.902134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902199] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:33.393 [2024-07-15 13:01:54.902215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902242] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.393 [2024-07-15 13:01:54.902250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.393 [2024-07-15 13:01:54.902262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902342] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.393 [2024-07-15 13:01:54.902350] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.393 [2024-07-15 13:01:54.902359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
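For reference, the target-side objects this controller initialization is talking to were created by the rpc.py calls above. Consolidated into one sketch per vfio-user device (socket directory, bdev size/name, and NQN taken from the log):

  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0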
00:09:33.393 [2024-07-15 13:01:54.902412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902447] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:33.393 [2024-07-15 13:01:54.902454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:33.393 [2024-07-15 13:01:54.902462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:33.393 [2024-07-15 13:01:54.902489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902616] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:33.393 [2024-07-15 13:01:54.902626] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:33.393 [2024-07-15 13:01:54.902635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:33.393 [2024-07-15 13:01:54.902642] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:33.393 [2024-07-15 13:01:54.902651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:33.393 [2024-07-15 13:01:54.902662] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:33.393 
[2024-07-15 13:01:54.902670] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:33.393 [2024-07-15 13:01:54.902679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902690] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:33.393 [2024-07-15 13:01:54.902698] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.393 [2024-07-15 13:01:54.902706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902718] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:33.393 [2024-07-15 13:01:54.902726] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:33.393 [2024-07-15 13:01:54.902735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:33.393 [2024-07-15 13:01:54.902746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:33.393 [2024-07-15 13:01:54.902798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:33.393 ===================================================== 00:09:33.393 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:33.393 ===================================================== 00:09:33.393 Controller Capabilities/Features 00:09:33.393 ================================ 00:09:33.393 Vendor ID: 4e58 00:09:33.393 Subsystem Vendor ID: 4e58 00:09:33.393 Serial Number: SPDK1 00:09:33.393 Model Number: SPDK bdev Controller 00:09:33.393 Firmware Version: 24.09 00:09:33.393 Recommended Arb Burst: 6 00:09:33.393 IEEE OUI Identifier: 8d 6b 50 00:09:33.393 Multi-path I/O 00:09:33.393 May have multiple subsystem ports: Yes 00:09:33.393 May have multiple controllers: Yes 00:09:33.393 Associated with SR-IOV VF: No 00:09:33.393 Max Data Transfer Size: 131072 00:09:33.393 Max Number of Namespaces: 32 00:09:33.393 Max Number of I/O Queues: 127 00:09:33.393 NVMe Specification Version (VS): 1.3 00:09:33.393 NVMe Specification Version (Identify): 1.3 00:09:33.393 Maximum Queue Entries: 256 00:09:33.393 Contiguous Queues Required: Yes 00:09:33.393 Arbitration Mechanisms Supported 00:09:33.393 Weighted Round Robin: Not Supported 00:09:33.393 Vendor Specific: Not Supported 00:09:33.393 Reset Timeout: 15000 ms 00:09:33.393 Doorbell Stride: 4 bytes 00:09:33.393 NVM Subsystem Reset: Not Supported 00:09:33.393 Command Sets Supported 00:09:33.393 NVM Command Set: Supported 00:09:33.393 Boot Partition: Not Supported 00:09:33.393 Memory Page Size Minimum: 4096 bytes 00:09:33.393 Memory Page Size Maximum: 4096 bytes 00:09:33.393 Persistent Memory Region: Not Supported 
00:09:33.393 Optional Asynchronous Events Supported 00:09:33.393 Namespace Attribute Notices: Supported 00:09:33.393 Firmware Activation Notices: Not Supported 00:09:33.393 ANA Change Notices: Not Supported 00:09:33.393 PLE Aggregate Log Change Notices: Not Supported 00:09:33.393 LBA Status Info Alert Notices: Not Supported 00:09:33.393 EGE Aggregate Log Change Notices: Not Supported 00:09:33.393 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.393 Zone Descriptor Change Notices: Not Supported 00:09:33.393 Discovery Log Change Notices: Not Supported 00:09:33.394 Controller Attributes 00:09:33.394 128-bit Host Identifier: Supported 00:09:33.394 Non-Operational Permissive Mode: Not Supported 00:09:33.394 NVM Sets: Not Supported 00:09:33.394 Read Recovery Levels: Not Supported 00:09:33.394 Endurance Groups: Not Supported 00:09:33.394 Predictable Latency Mode: Not Supported 00:09:33.394 Traffic Based Keep ALive: Not Supported 00:09:33.394 Namespace Granularity: Not Supported 00:09:33.394 SQ Associations: Not Supported 00:09:33.394 UUID List: Not Supported 00:09:33.394 Multi-Domain Subsystem: Not Supported 00:09:33.394 Fixed Capacity Management: Not Supported 00:09:33.394 Variable Capacity Management: Not Supported 00:09:33.394 Delete Endurance Group: Not Supported 00:09:33.394 Delete NVM Set: Not Supported 00:09:33.394 Extended LBA Formats Supported: Not Supported 00:09:33.394 Flexible Data Placement Supported: Not Supported 00:09:33.394 00:09:33.394 Controller Memory Buffer Support 00:09:33.394 ================================ 00:09:33.394 Supported: No 00:09:33.394 00:09:33.394 Persistent Memory Region Support 00:09:33.394 ================================ 00:09:33.394 Supported: No 00:09:33.394 00:09:33.394 Admin Command Set Attributes 00:09:33.394 ============================ 00:09:33.394 Security Send/Receive: Not Supported 00:09:33.394 Format NVM: Not Supported 00:09:33.394 Firmware Activate/Download: Not Supported 00:09:33.394 Namespace Management: Not Supported 00:09:33.394 Device Self-Test: Not Supported 00:09:33.394 Directives: Not Supported 00:09:33.394 NVMe-MI: Not Supported 00:09:33.394 Virtualization Management: Not Supported 00:09:33.394 Doorbell Buffer Config: Not Supported 00:09:33.394 Get LBA Status Capability: Not Supported 00:09:33.394 Command & Feature Lockdown Capability: Not Supported 00:09:33.394 Abort Command Limit: 4 00:09:33.394 Async Event Request Limit: 4 00:09:33.394 Number of Firmware Slots: N/A 00:09:33.394 Firmware Slot 1 Read-Only: N/A 00:09:33.394 Firmware Activation Without Reset: N/A 00:09:33.394 Multiple Update Detection Support: N/A 00:09:33.394 Firmware Update Granularity: No Information Provided 00:09:33.394 Per-Namespace SMART Log: No 00:09:33.394 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.394 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:33.394 Command Effects Log Page: Supported 00:09:33.394 Get Log Page Extended Data: Supported 00:09:33.394 Telemetry Log Pages: Not Supported 00:09:33.394 Persistent Event Log Pages: Not Supported 00:09:33.394 Supported Log Pages Log Page: May Support 00:09:33.394 Commands Supported & Effects Log Page: Not Supported 00:09:33.394 Feature Identifiers & Effects Log Page:May Support 00:09:33.394 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.394 Data Area 4 for Telemetry Log: Not Supported 00:09:33.394 Error Log Page Entries Supported: 128 00:09:33.394 Keep Alive: Supported 00:09:33.394 Keep Alive Granularity: 10000 ms 00:09:33.394 00:09:33.394 NVM Command Set Attributes 
00:09:33.394 ========================== 00:09:33.394 Submission Queue Entry Size 00:09:33.394 Max: 64 00:09:33.394 Min: 64 00:09:33.394 Completion Queue Entry Size 00:09:33.394 Max: 16 00:09:33.394 Min: 16 00:09:33.394 Number of Namespaces: 32 00:09:33.394 Compare Command: Supported 00:09:33.394 Write Uncorrectable Command: Not Supported 00:09:33.394 Dataset Management Command: Supported 00:09:33.394 Write Zeroes Command: Supported 00:09:33.394 Set Features Save Field: Not Supported 00:09:33.394 Reservations: Not Supported 00:09:33.394 Timestamp: Not Supported 00:09:33.394 Copy: Supported 00:09:33.394 Volatile Write Cache: Present 00:09:33.394 Atomic Write Unit (Normal): 1 00:09:33.394 Atomic Write Unit (PFail): 1 00:09:33.394 Atomic Compare & Write Unit: 1 00:09:33.394 Fused Compare & Write: Supported 00:09:33.394 Scatter-Gather List 00:09:33.394 SGL Command Set: Supported (Dword aligned) 00:09:33.394 SGL Keyed: Not Supported 00:09:33.394 SGL Bit Bucket Descriptor: Not Supported 00:09:33.394 SGL Metadata Pointer: Not Supported 00:09:33.394 Oversized SGL: Not Supported 00:09:33.394 SGL Metadata Address: Not Supported 00:09:33.394 SGL Offset: Not Supported 00:09:33.394 Transport SGL Data Block: Not Supported 00:09:33.394 Replay Protected Memory Block: Not Supported 00:09:33.394 00:09:33.394 Firmware Slot Information 00:09:33.394 ========================= 00:09:33.394 Active slot: 1 00:09:33.394 Slot 1 Firmware Revision: 24.09 00:09:33.394 00:09:33.394 00:09:33.394 Commands Supported and Effects 00:09:33.394 ============================== 00:09:33.394 Admin Commands 00:09:33.394 -------------- 00:09:33.394 Get Log Page (02h): Supported 00:09:33.394 Identify (06h): Supported 00:09:33.394 Abort (08h): Supported 00:09:33.394 Set Features (09h): Supported 00:09:33.394 Get Features (0Ah): Supported 00:09:33.394 Asynchronous Event Request (0Ch): Supported 00:09:33.394 Keep Alive (18h): Supported 00:09:33.394 I/O Commands 00:09:33.394 ------------ 00:09:33.394 Flush (00h): Supported LBA-Change 00:09:33.394 Write (01h): Supported LBA-Change 00:09:33.394 Read (02h): Supported 00:09:33.394 Compare (05h): Supported 00:09:33.394 Write Zeroes (08h): Supported LBA-Change 00:09:33.394 Dataset Management (09h): Supported LBA-Change 00:09:33.394 Copy (19h): Supported LBA-Change 00:09:33.394 00:09:33.394 Error Log 00:09:33.394 ========= 00:09:33.394 00:09:33.394 Arbitration 00:09:33.394 =========== 00:09:33.394 Arbitration Burst: 1 00:09:33.394 00:09:33.394 Power Management 00:09:33.394 ================ 00:09:33.394 Number of Power States: 1 00:09:33.394 Current Power State: Power State #0 00:09:33.394 Power State #0: 00:09:33.394 Max Power: 0.00 W 00:09:33.394 Non-Operational State: Operational 00:09:33.394 Entry Latency: Not Reported 00:09:33.394 Exit Latency: Not Reported 00:09:33.394 Relative Read Throughput: 0 00:09:33.394 Relative Read Latency: 0 00:09:33.394 Relative Write Throughput: 0 00:09:33.394 Relative Write Latency: 0 00:09:33.394 Idle Power: Not Reported 00:09:33.394 Active Power: Not Reported 00:09:33.394 Non-Operational Permissive Mode: Not Supported 00:09:33.394 00:09:33.394 Health Information 00:09:33.394 ================== 00:09:33.394 Critical Warnings: 00:09:33.394 Available Spare Space: OK 00:09:33.394 Temperature: OK 00:09:33.394 Device Reliability: OK 00:09:33.394 Read Only: No 00:09:33.394 Volatile Memory Backup: OK 00:09:33.394 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:33.394 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:33.394 Available Spare: 0% 00:09:33.394 
Available Sp[2024-07-15 13:01:54.902958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:33.394 [2024-07-15 13:01:54.902975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:33.394 [2024-07-15 13:01:54.903022] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:33.394 [2024-07-15 13:01:54.903040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.394 [2024-07-15 13:01:54.903051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.394 [2024-07-15 13:01:54.903062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.394 [2024-07-15 13:01:54.903072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.394 [2024-07-15 13:01:54.903525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.394 [2024-07-15 13:01:54.903546] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:33.394 [2024-07-15 13:01:54.904532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:33.394 [2024-07-15 13:01:54.904606] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:33.394 [2024-07-15 13:01:54.904619] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:33.394 [2024-07-15 13:01:54.905535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:33.394 [2024-07-15 13:01:54.905557] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:33.394 [2024-07-15 13:01:54.905611] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:33.394 [2024-07-15 13:01:54.910887] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.394 are Threshold: 0% 00:09:33.394 Life Percentage Used: 0% 00:09:33.394 Data Units Read: 0 00:09:33.394 Data Units Written: 0 00:09:33.394 Host Read Commands: 0 00:09:33.394 Host Write Commands: 0 00:09:33.394 Controller Busy Time: 0 minutes 00:09:33.394 Power Cycles: 0 00:09:33.394 Power On Hours: 0 hours 00:09:33.394 Unsafe Shutdowns: 0 00:09:33.394 Unrecoverable Media Errors: 0 00:09:33.394 Lifetime Error Log Entries: 0 00:09:33.394 Warning Temperature Time: 0 minutes 00:09:33.394 Critical Temperature Time: 0 minutes 00:09:33.394 00:09:33.394 Number of Queues 00:09:33.394 ================ 00:09:33.394 Number of I/O Submission Queues: 127 00:09:33.394 Number of I/O Completion Queues: 127 00:09:33.394 00:09:33.394 Active Namespaces 00:09:33.394 ================= 00:09:33.394 Namespace ID:1 00:09:33.394 Error Recovery Timeout: Unlimited 00:09:33.394 Command 
Set Identifier: NVM (00h) 00:09:33.394 Deallocate: Supported 00:09:33.395 Deallocated/Unwritten Error: Not Supported 00:09:33.395 Deallocated Read Value: Unknown 00:09:33.395 Deallocate in Write Zeroes: Not Supported 00:09:33.395 Deallocated Guard Field: 0xFFFF 00:09:33.395 Flush: Supported 00:09:33.395 Reservation: Supported 00:09:33.395 Namespace Sharing Capabilities: Multiple Controllers 00:09:33.395 Size (in LBAs): 131072 (0GiB) 00:09:33.395 Capacity (in LBAs): 131072 (0GiB) 00:09:33.395 Utilization (in LBAs): 131072 (0GiB) 00:09:33.395 NGUID: 1A13905382FB4E13B6C43AF2FC947B56 00:09:33.395 UUID: 1a139053-82fb-4e13-b6c4-3af2fc947b56 00:09:33.395 Thin Provisioning: Not Supported 00:09:33.395 Per-NS Atomic Units: Yes 00:09:33.395 Atomic Boundary Size (Normal): 0 00:09:33.395 Atomic Boundary Size (PFail): 0 00:09:33.395 Atomic Boundary Offset: 0 00:09:33.395 Maximum Single Source Range Length: 65535 00:09:33.395 Maximum Copy Length: 65535 00:09:33.395 Maximum Source Range Count: 1 00:09:33.395 NGUID/EUI64 Never Reused: No 00:09:33.395 Namespace Write Protected: No 00:09:33.395 Number of LBA Formats: 1 00:09:33.395 Current LBA Format: LBA Format #00 00:09:33.395 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.395 00:09:33.395 13:01:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:33.395 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.653 [2024-07-15 13:01:55.142726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:38.916 Initializing NVMe Controllers 00:09:38.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:38.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:38.916 Initialization complete. Launching workers. 00:09:38.916 ======================================================== 00:09:38.916 Latency(us) 00:09:38.916 Device Information : IOPS MiB/s Average min max 00:09:38.916 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34384.10 134.31 3722.10 1175.51 7663.87 00:09:38.916 ======================================================== 00:09:38.916 Total : 34384.10 134.31 3722.10 1175.51 7663.87 00:09:38.916 00:09:38.916 [2024-07-15 13:02:00.165466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:38.916 13:02:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:38.916 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.916 [2024-07-15 13:02:00.406650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:44.182 Initializing NVMe Controllers 00:09:44.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:44.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:44.182 Initialization complete. Launching workers. 
00:09:44.182 ======================================================== 00:09:44.182 Latency(us) 00:09:44.182 Device Information : IOPS MiB/s Average min max 00:09:44.182 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.16 62.70 7984.39 6988.71 11989.85 00:09:44.182 ======================================================== 00:09:44.182 Total : 16051.16 62.70 7984.39 6988.71 11989.85 00:09:44.182 00:09:44.182 [2024-07-15 13:02:05.443652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:44.182 13:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:44.182 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.182 [2024-07-15 13:02:05.664806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.450 [2024-07-15 13:02:10.730222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:49.450 Initializing NVMe Controllers 00:09:49.450 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.450 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:49.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:49.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:49.450 Initialization complete. Launching workers. 00:09:49.450 Starting thread on core 2 00:09:49.450 Starting thread on core 3 00:09:49.450 Starting thread on core 1 00:09:49.450 13:02:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:49.450 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.450 [2024-07-15 13:02:11.033359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.648 [2024-07-15 13:02:14.673174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.648 Initializing NVMe Controllers 00:09:53.648 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.648 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.648 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:53.648 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:53.648 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:53.648 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:53.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:53.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:53.648 Initialization complete. Launching workers. 
00:09:53.648 Starting thread on core 1 with urgent priority queue 00:09:53.648 Starting thread on core 2 with urgent priority queue 00:09:53.648 Starting thread on core 3 with urgent priority queue 00:09:53.648 Starting thread on core 0 with urgent priority queue 00:09:53.648 SPDK bdev Controller (SPDK1 ) core 0: 5009.00 IO/s 19.96 secs/100000 ios 00:09:53.648 SPDK bdev Controller (SPDK1 ) core 1: 5507.00 IO/s 18.16 secs/100000 ios 00:09:53.648 SPDK bdev Controller (SPDK1 ) core 2: 5428.33 IO/s 18.42 secs/100000 ios 00:09:53.648 SPDK bdev Controller (SPDK1 ) core 3: 5377.00 IO/s 18.60 secs/100000 ios 00:09:53.648 ======================================================== 00:09:53.648 00:09:53.648 13:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:53.648 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.648 [2024-07-15 13:02:14.969485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.648 Initializing NVMe Controllers 00:09:53.648 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.648 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.648 Namespace ID: 1 size: 0GB 00:09:53.648 Initialization complete. 00:09:53.648 INFO: using host memory buffer for IO 00:09:53.648 Hello world! 00:09:53.648 [2024-07-15 13:02:15.004135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.648 13:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:53.648 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.648 [2024-07-15 13:02:15.301326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:55.028 Initializing NVMe Controllers 00:09:55.028 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.028 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.028 Initialization complete. Launching workers. 
00:09:55.028 submit (in ns) avg, min, max = 7260.0, 3526.7, 4016761.1 00:09:55.028 complete (in ns) avg, min, max = 28484.8, 2065.6, 5995758.9 00:09:55.028 00:09:55.028 Submit histogram 00:09:55.028 ================ 00:09:55.028 Range in us Cumulative Count 00:09:55.028 3.508 - 3.532: 0.0973% ( 13) 00:09:55.028 3.532 - 3.556: 0.7182% ( 83) 00:09:55.028 3.556 - 3.579: 2.7007% ( 265) 00:09:55.028 3.579 - 3.603: 6.9350% ( 566) 00:09:55.028 3.603 - 3.627: 14.3188% ( 987) 00:09:55.028 3.627 - 3.650: 23.8423% ( 1273) 00:09:55.028 3.650 - 3.674: 32.6102% ( 1172) 00:09:55.028 3.674 - 3.698: 39.8294% ( 965) 00:09:55.028 3.698 - 3.721: 46.8093% ( 933) 00:09:55.028 3.721 - 3.745: 52.6670% ( 783) 00:09:55.028 3.745 - 3.769: 57.3128% ( 621) 00:09:55.028 3.769 - 3.793: 61.4124% ( 548) 00:09:55.028 3.793 - 3.816: 64.5246% ( 416) 00:09:55.028 3.816 - 3.840: 68.0482% ( 471) 00:09:55.028 3.840 - 3.864: 71.9384% ( 520) 00:09:55.028 3.864 - 3.887: 76.0829% ( 554) 00:09:55.028 3.887 - 3.911: 79.7187% ( 486) 00:09:55.028 3.911 - 3.935: 82.7486% ( 405) 00:09:55.028 3.935 - 3.959: 85.1874% ( 326) 00:09:55.028 3.959 - 3.982: 87.0502% ( 249) 00:09:55.028 3.982 - 4.006: 88.9504% ( 254) 00:09:55.028 4.006 - 4.030: 90.2446% ( 173) 00:09:55.028 4.030 - 4.053: 91.4042% ( 155) 00:09:55.028 4.053 - 4.077: 92.4815% ( 144) 00:09:55.028 4.077 - 4.101: 93.2146% ( 98) 00:09:55.028 4.101 - 4.124: 94.0301% ( 109) 00:09:55.028 4.124 - 4.148: 94.5762% ( 73) 00:09:55.028 4.148 - 4.172: 95.1148% ( 72) 00:09:55.028 4.172 - 4.196: 95.4814% ( 49) 00:09:55.028 4.196 - 4.219: 95.7956% ( 42) 00:09:55.028 4.219 - 4.243: 96.0051% ( 28) 00:09:55.028 4.243 - 4.267: 96.2445% ( 32) 00:09:55.028 4.267 - 4.290: 96.3866% ( 19) 00:09:55.028 4.290 - 4.314: 96.5138% ( 17) 00:09:55.028 4.314 - 4.338: 96.6335% ( 16) 00:09:55.028 4.338 - 4.361: 96.7607% ( 17) 00:09:55.028 4.361 - 4.385: 96.8280% ( 9) 00:09:55.028 4.385 - 4.409: 96.8654% ( 5) 00:09:55.028 4.409 - 4.433: 96.8953% ( 4) 00:09:55.028 4.433 - 4.456: 96.9178% ( 3) 00:09:55.028 4.456 - 4.480: 96.9702% ( 7) 00:09:55.028 4.480 - 4.504: 97.0001% ( 4) 00:09:55.028 4.504 - 4.527: 97.0150% ( 2) 00:09:55.028 4.527 - 4.551: 97.0450% ( 4) 00:09:55.028 4.551 - 4.575: 97.0749% ( 4) 00:09:55.028 4.575 - 4.599: 97.0973% ( 3) 00:09:55.028 4.599 - 4.622: 97.1198% ( 3) 00:09:55.028 4.622 - 4.646: 97.1273% ( 1) 00:09:55.028 4.646 - 4.670: 97.1572% ( 4) 00:09:55.028 4.670 - 4.693: 97.1796% ( 3) 00:09:55.028 4.693 - 4.717: 97.2170% ( 5) 00:09:55.028 4.717 - 4.741: 97.2993% ( 11) 00:09:55.028 4.741 - 4.764: 97.3442% ( 6) 00:09:55.028 4.764 - 4.788: 97.3741% ( 4) 00:09:55.028 4.788 - 4.812: 97.4115% ( 5) 00:09:55.028 4.812 - 4.836: 97.4789% ( 9) 00:09:55.028 4.836 - 4.859: 97.5238% ( 6) 00:09:55.028 4.859 - 4.883: 97.5761% ( 7) 00:09:55.028 4.883 - 4.907: 97.6210% ( 6) 00:09:55.028 4.907 - 4.930: 97.6584% ( 5) 00:09:55.028 4.930 - 4.954: 97.6659% ( 1) 00:09:55.028 4.954 - 4.978: 97.6883% ( 3) 00:09:55.028 4.978 - 5.001: 97.7332% ( 6) 00:09:55.028 5.001 - 5.025: 97.7631% ( 4) 00:09:55.028 5.025 - 5.049: 97.8006% ( 5) 00:09:55.028 5.049 - 5.073: 97.8529% ( 7) 00:09:55.028 5.073 - 5.096: 97.9053% ( 7) 00:09:55.028 5.096 - 5.120: 97.9128% ( 1) 00:09:55.028 5.120 - 5.144: 97.9277% ( 2) 00:09:55.028 5.144 - 5.167: 97.9651% ( 5) 00:09:55.028 5.167 - 5.191: 97.9801% ( 2) 00:09:55.028 5.191 - 5.215: 97.9951% ( 2) 00:09:55.028 5.239 - 5.262: 98.0025% ( 1) 00:09:55.028 5.262 - 5.286: 98.0175% ( 2) 00:09:55.028 5.286 - 5.310: 98.0325% ( 2) 00:09:55.028 5.310 - 5.333: 98.0474% ( 2) 00:09:55.028 5.333 - 5.357: 98.0549% ( 1) 
00:09:55.028 5.381 - 5.404: 98.0848% ( 4) 00:09:55.028 5.547 - 5.570: 98.0923% ( 1) 00:09:55.028 5.570 - 5.594: 98.0998% ( 1) 00:09:55.028 5.594 - 5.618: 98.1073% ( 1) 00:09:55.028 5.665 - 5.689: 98.1148% ( 1) 00:09:55.028 5.689 - 5.713: 98.1222% ( 1) 00:09:55.028 5.784 - 5.807: 98.1297% ( 1) 00:09:55.028 5.879 - 5.902: 98.1372% ( 1) 00:09:55.028 5.950 - 5.973: 98.1447% ( 1) 00:09:55.028 5.997 - 6.021: 98.1522% ( 1) 00:09:55.028 6.116 - 6.163: 98.1671% ( 2) 00:09:55.028 6.163 - 6.210: 98.1746% ( 1) 00:09:55.028 6.258 - 6.305: 98.1896% ( 2) 00:09:55.028 6.353 - 6.400: 98.1971% ( 1) 00:09:55.028 6.400 - 6.447: 98.2120% ( 2) 00:09:55.028 6.495 - 6.542: 98.2195% ( 1) 00:09:55.028 6.590 - 6.637: 98.2270% ( 1) 00:09:55.028 6.779 - 6.827: 98.2345% ( 1) 00:09:55.028 6.827 - 6.874: 98.2494% ( 2) 00:09:55.028 6.969 - 7.016: 98.2569% ( 1) 00:09:55.028 7.016 - 7.064: 98.2644% ( 1) 00:09:55.028 7.111 - 7.159: 98.2793% ( 2) 00:09:55.028 7.206 - 7.253: 98.2868% ( 1) 00:09:55.028 7.253 - 7.301: 98.3168% ( 4) 00:09:55.028 7.301 - 7.348: 98.3317% ( 2) 00:09:55.028 7.348 - 7.396: 98.3542% ( 3) 00:09:55.028 7.443 - 7.490: 98.3841% ( 4) 00:09:55.028 7.490 - 7.538: 98.3990% ( 2) 00:09:55.028 7.538 - 7.585: 98.4215% ( 3) 00:09:55.028 7.585 - 7.633: 98.4364% ( 2) 00:09:55.028 7.633 - 7.680: 98.4439% ( 1) 00:09:55.028 7.680 - 7.727: 98.4589% ( 2) 00:09:55.028 7.822 - 7.870: 98.4664% ( 1) 00:09:55.028 7.870 - 7.917: 98.4813% ( 2) 00:09:55.028 7.917 - 7.964: 98.4888% ( 1) 00:09:55.028 7.964 - 8.012: 98.4963% ( 1) 00:09:55.028 8.012 - 8.059: 98.5113% ( 2) 00:09:55.028 8.107 - 8.154: 98.5187% ( 1) 00:09:55.028 8.154 - 8.201: 98.5337% ( 2) 00:09:55.028 8.249 - 8.296: 98.5487% ( 2) 00:09:55.028 8.296 - 8.344: 98.5636% ( 2) 00:09:55.028 8.391 - 8.439: 98.5936% ( 4) 00:09:55.028 8.486 - 8.533: 98.6010% ( 1) 00:09:55.028 8.723 - 8.770: 98.6160% ( 2) 00:09:55.028 8.818 - 8.865: 98.6235% ( 1) 00:09:55.028 8.865 - 8.913: 98.6310% ( 1) 00:09:55.028 8.913 - 8.960: 98.6384% ( 1) 00:09:55.028 8.960 - 9.007: 98.6459% ( 1) 00:09:55.028 9.102 - 9.150: 98.6534% ( 1) 00:09:55.028 9.150 - 9.197: 98.6609% ( 1) 00:09:55.028 9.387 - 9.434: 98.6684% ( 1) 00:09:55.028 9.576 - 9.624: 98.6758% ( 1) 00:09:55.028 9.813 - 9.861: 98.6833% ( 1) 00:09:55.028 10.098 - 10.145: 98.6908% ( 1) 00:09:55.028 10.193 - 10.240: 98.6983% ( 1) 00:09:55.028 10.240 - 10.287: 98.7058% ( 1) 00:09:55.028 10.382 - 10.430: 98.7132% ( 1) 00:09:55.028 10.430 - 10.477: 98.7207% ( 1) 00:09:55.028 10.572 - 10.619: 98.7282% ( 1) 00:09:55.028 10.667 - 10.714: 98.7357% ( 1) 00:09:55.028 11.188 - 11.236: 98.7507% ( 2) 00:09:55.028 11.378 - 11.425: 98.7581% ( 1) 00:09:55.028 11.425 - 11.473: 98.7656% ( 1) 00:09:55.028 11.710 - 11.757: 98.7731% ( 1) 00:09:55.028 11.757 - 11.804: 98.7881% ( 2) 00:09:55.028 11.804 - 11.852: 98.7955% ( 1) 00:09:55.028 11.947 - 11.994: 98.8105% ( 2) 00:09:55.028 12.041 - 12.089: 98.8255% ( 2) 00:09:55.028 12.089 - 12.136: 98.8329% ( 1) 00:09:55.028 12.136 - 12.231: 98.8479% ( 2) 00:09:55.028 12.231 - 12.326: 98.8554% ( 1) 00:09:55.028 12.421 - 12.516: 98.8778% ( 3) 00:09:55.028 12.516 - 12.610: 98.8853% ( 1) 00:09:55.028 12.610 - 12.705: 98.8928% ( 1) 00:09:55.028 12.800 - 12.895: 98.9003% ( 1) 00:09:55.028 12.895 - 12.990: 98.9078% ( 1) 00:09:55.028 13.084 - 13.179: 98.9152% ( 1) 00:09:55.028 13.179 - 13.274: 98.9227% ( 1) 00:09:55.028 13.274 - 13.369: 98.9302% ( 1) 00:09:55.028 13.369 - 13.464: 98.9377% ( 1) 00:09:55.028 13.464 - 13.559: 98.9452% ( 1) 00:09:55.028 13.559 - 13.653: 98.9526% ( 1) 00:09:55.028 13.748 - 13.843: 98.9601% ( 1) 
00:09:55.028 13.843 - 13.938: 98.9676% ( 1) 00:09:55.028 13.938 - 14.033: 98.9751% ( 1) 00:09:55.028 14.127 - 14.222: 98.9826% ( 1) 00:09:55.028 14.222 - 14.317: 98.9975% ( 2) 00:09:55.028 14.412 - 14.507: 99.0050% ( 1) 00:09:55.028 14.791 - 14.886: 99.0125% ( 1) 00:09:55.028 17.067 - 17.161: 99.0200% ( 1) 00:09:55.028 17.256 - 17.351: 99.0424% ( 3) 00:09:55.028 17.351 - 17.446: 99.0499% ( 1) 00:09:55.028 17.446 - 17.541: 99.0798% ( 4) 00:09:55.028 17.541 - 17.636: 99.0948% ( 2) 00:09:55.028 17.636 - 17.730: 99.1472% ( 7) 00:09:55.028 17.730 - 17.825: 99.1771% ( 4) 00:09:55.028 17.825 - 17.920: 99.2070% ( 4) 00:09:55.028 17.920 - 18.015: 99.2519% ( 6) 00:09:55.028 18.015 - 18.110: 99.2968% ( 6) 00:09:55.029 18.110 - 18.204: 99.3491% ( 7) 00:09:55.029 18.204 - 18.299: 99.4614% ( 15) 00:09:55.029 18.299 - 18.394: 99.5137% ( 7) 00:09:55.029 18.394 - 18.489: 99.5736% ( 8) 00:09:55.029 18.489 - 18.584: 99.6409% ( 9) 00:09:55.029 18.584 - 18.679: 99.6634% ( 3) 00:09:55.029 18.679 - 18.773: 99.7082% ( 6) 00:09:55.029 18.773 - 18.868: 99.7531% ( 6) 00:09:55.029 18.868 - 18.963: 99.8055% ( 7) 00:09:55.029 18.963 - 19.058: 99.8130% ( 1) 00:09:55.029 19.058 - 19.153: 99.8279% ( 2) 00:09:55.029 19.247 - 19.342: 99.8429% ( 2) 00:09:55.029 19.342 - 19.437: 99.8504% ( 1) 00:09:55.029 19.532 - 19.627: 99.8653% ( 2) 00:09:55.029 19.627 - 19.721: 99.8728% ( 1) 00:09:55.029 19.721 - 19.816: 99.8878% ( 2) 00:09:55.029 20.954 - 21.049: 99.8953% ( 1) 00:09:55.029 22.566 - 22.661: 99.9027% ( 1) 00:09:55.029 23.419 - 23.514: 99.9102% ( 1) 00:09:55.029 29.582 - 29.772: 99.9177% ( 1) 00:09:55.029 3980.705 - 4004.978: 99.9776% ( 8) 00:09:55.029 4004.978 - 4029.250: 100.0000% ( 3) 00:09:55.029 00:09:55.029 Complete histogram 00:09:55.029 ================== 00:09:55.029 Range in us Cumulative Count 00:09:55.029 2.062 - 2.074: 7.4437% ( 995) 00:09:55.029 2.074 - 2.086: 44.1685% ( 4909) 00:09:55.029 2.086 - 2.098: 47.7295% ( 476) 00:09:55.029 2.098 - 2.110: 53.2580% ( 739) 00:09:55.029 2.110 - 2.121: 59.3776% ( 818) 00:09:55.029 2.121 - 2.133: 60.2753% ( 120) 00:09:55.029 2.133 - 2.145: 67.8761% ( 1016) 00:09:55.029 2.145 - 2.157: 75.8136% ( 1061) 00:09:55.029 2.157 - 2.169: 76.7861% ( 130) 00:09:55.029 2.169 - 2.181: 79.7412% ( 395) 00:09:55.029 2.181 - 2.193: 81.8658% ( 284) 00:09:55.029 2.193 - 2.204: 82.4643% ( 80) 00:09:55.029 2.204 - 2.216: 85.2921% ( 378) 00:09:55.029 2.216 - 2.228: 88.8307% ( 473) 00:09:55.029 2.228 - 2.240: 90.7309% ( 254) 00:09:55.029 2.240 - 2.252: 92.0625% ( 178) 00:09:55.029 2.252 - 2.264: 92.9453% ( 118) 00:09:55.029 2.264 - 2.276: 93.1772% ( 31) 00:09:55.029 2.276 - 2.287: 93.5588% ( 51) 00:09:55.029 2.287 - 2.299: 94.0001% ( 59) 00:09:55.029 2.299 - 2.311: 94.6809% ( 91) 00:09:55.029 2.311 - 2.323: 95.0026% ( 43) 00:09:55.029 2.323 - 2.335: 95.0924% ( 12) 00:09:55.029 2.335 - 2.347: 95.1747% ( 11) 00:09:55.029 2.347 - 2.359: 95.5263% ( 47) 00:09:55.029 2.359 - 2.370: 95.8629% ( 45) 00:09:55.029 2.370 - 2.382: 96.2445% ( 51) 00:09:55.029 2.382 - 2.394: 96.5662% ( 43) 00:09:55.029 2.394 - 2.406: 96.9552% ( 52) 00:09:55.029 2.406 - 2.418: 97.1273% ( 23) 00:09:55.029 2.418 - 2.430: 97.3442% ( 29) 00:09:55.029 2.430 - 2.441: 97.5163% ( 23) 00:09:55.029 2.441 - 2.453: 97.6659% ( 20) 00:09:55.029 2.453 - 2.465: 97.7856% ( 16) 00:09:55.029 2.465 - 2.477: 97.8754% ( 12) 00:09:55.029 2.477 - 2.489: 97.9352% ( 8) 00:09:55.029 2.489 - 2.501: 98.0100% ( 10) 00:09:55.029 2.501 - 2.513: 98.0325% ( 3) 00:09:55.029 2.513 - 2.524: 98.0774% ( 6) 00:09:55.029 2.524 - 2.536: 98.1222% ( 6) 00:09:55.029 2.536 
- 2.548: 98.1671% ( 6) 00:09:55.029 2.548 - 2.560: 98.1971% ( 4) 00:09:55.029 2.560 - 2.572: 98.2195% ( 3) 00:09:55.029 2.572 - 2.584: 98.2270% ( 1) 00:09:55.029 2.584 - 2.596: 98.2494% ( 3) 00:09:55.029 2.596 - 2.607: 98.2719% ( 3) 00:09:55.029 2.607 - 2.619: 98.2943% ( 3) 00:09:55.029 2.619 - 2.631: 98.3018% ( 1) 00:09:55.029 2.631 - 2.643: 98.3093% ( 1) 00:09:55.029 2.643 - 2.655: 98.3242% ( 2) 00:09:55.029 2.655 - 2.667: 98.3317% ( 1) 00:09:55.029 2.667 - 2.679: 98.3616% ( 4) 00:09:55.029 2.679 - 2.690: 98.4065% ( 6) 00:09:55.029 [2024-07-15 13:02:16.322435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:55.029 2.690 - 2.702: 98.4140% ( 1) 00:09:55.029 2.714 - 2.726: 98.4290% ( 2) 00:09:55.029 2.726 - 2.738: 98.4364% ( 1) 00:09:55.029 2.738 - 2.750: 98.4439% ( 1) 00:09:55.029 2.761 - 2.773: 98.4514% ( 1) 00:09:55.029 2.773 - 2.785: 98.4589% ( 1) 00:09:55.029 2.821 - 2.833: 98.4664% ( 1) 00:09:55.029 2.844 - 2.856: 98.4739% ( 1) 00:09:55.029 2.856 - 2.868: 98.4813% ( 1) 00:09:55.029 2.916 - 2.927: 98.4888% ( 1) 00:09:55.029 2.999 - 3.010: 98.4963% ( 1) 00:09:55.029 3.058 - 3.081: 98.5038% ( 1) 00:09:55.029 3.200 - 3.224: 98.5113% ( 1) 00:09:55.029 3.271 - 3.295: 98.5187% ( 1) 00:09:55.029 3.295 - 3.319: 98.5412% ( 3) 00:09:55.029 3.342 - 3.366: 98.5561% ( 2) 00:09:55.029 3.390 - 3.413: 98.5711% ( 2) 00:09:55.029 3.413 - 3.437: 98.5936% ( 3) 00:09:55.029 3.437 - 3.461: 98.6010% ( 1) 00:09:55.029 3.461 - 3.484: 98.6160% ( 2) 00:09:55.029 3.484 - 3.508: 98.6310% ( 2) 00:09:55.029 3.508 - 3.532: 98.6459% ( 2) 00:09:55.029 3.532 - 3.556: 98.6534% ( 1) 00:09:55.029 3.556 - 3.579: 98.6609% ( 1) 00:09:55.029 3.627 - 3.650: 98.6684% ( 1) 00:09:55.029 3.650 - 3.674: 98.6758% ( 1) 00:09:55.029 3.745 - 3.769: 98.6833% ( 1) 00:09:55.029 3.769 - 3.793: 98.6908% ( 1) 00:09:55.029 4.101 - 4.124: 98.6983% ( 1) 00:09:55.029 4.267 - 4.290: 98.7058% ( 1) 00:09:55.029 4.409 - 4.433: 98.7207% ( 2) 00:09:55.029 5.001 - 5.025: 98.7282% ( 1) 00:09:55.029 5.333 - 5.357: 98.7357% ( 1) 00:09:55.029 5.855 - 5.879: 98.7432% ( 1) 00:09:55.029 6.044 - 6.068: 98.7507% ( 1) 00:09:55.029 6.068 - 6.116: 98.7581% ( 1) 00:09:55.029 6.258 - 6.305: 98.7656% ( 1) 00:09:55.029 6.353 - 6.400: 98.7731% ( 1) 00:09:55.029 6.400 - 6.447: 98.7806% ( 1) 00:09:55.029 6.495 - 6.542: 98.7881% ( 1) 00:09:55.029 6.542 - 6.590: 98.7955% ( 1) 00:09:55.029 6.637 - 6.684: 98.8030% ( 1) 00:09:55.029 6.684 - 6.732: 98.8105% ( 1) 00:09:55.029 6.732 - 6.779: 98.8180% ( 1) 00:09:55.029 7.206 - 7.253: 98.8255% ( 1) 00:09:55.029 7.301 - 7.348: 98.8329% ( 1) 00:09:55.029 8.391 - 8.439: 98.8404% ( 1) 00:09:55.029 8.865 - 8.913: 98.8479% ( 1) 00:09:55.029 9.197 - 9.244: 98.8554% ( 1) 00:09:55.029 15.265 - 15.360: 98.8629% ( 1) 00:09:55.029 15.550 - 15.644: 98.8704% ( 1) 00:09:55.029 15.644 - 15.739: 98.8778% ( 1) 00:09:55.029 15.739 - 15.834: 98.8928% ( 2) 00:09:55.029 15.834 - 15.929: 98.9227% ( 4) 00:09:55.029 15.929 - 16.024: 98.9377% ( 2) 00:09:55.029 16.024 - 16.119: 98.9526% ( 2) 00:09:55.029 16.119 - 16.213: 98.9751% ( 3) 00:09:55.029 16.213 - 16.308: 98.9975% ( 3) 00:09:55.029 16.308 - 16.403: 99.0499% ( 7) 00:09:55.029 16.403 - 16.498: 99.1023% ( 7) 00:09:55.029 16.498 - 16.593: 99.1247% ( 3) 00:09:55.029 16.593 - 16.687: 99.1696% ( 6) 00:09:55.029 16.687 - 16.782: 99.1995% ( 4) 00:09:55.029 16.782 - 16.877: 99.2444% ( 6) 00:09:55.029 16.877 - 16.972: 99.2519% ( 1) 00:09:55.029 16.972 - 17.067: 99.2743% ( 3) 00:09:55.029 17.161 - 17.256: 99.2893% ( 2) 00:09:55.029 17.446 - 17.541: 
99.3043% ( 2) 00:09:55.029 17.636 - 17.730: 99.3117% ( 1) 00:09:55.029 17.920 - 18.015: 99.3192% ( 1) 00:09:55.029 18.299 - 18.394: 99.3267% ( 1) 00:09:55.029 18.394 - 18.489: 99.3342% ( 1) 00:09:55.029 18.489 - 18.584: 99.3417% ( 1) 00:09:55.029 2014.625 - 2026.761: 99.3491% ( 1) 00:09:55.029 2196.670 - 2208.806: 99.3566% ( 1) 00:09:55.029 3009.801 - 3021.938: 99.3641% ( 1) 00:09:55.029 3980.705 - 4004.978: 99.8878% ( 70) 00:09:55.029 4004.978 - 4029.250: 99.9850% ( 13) 00:09:55.029 5971.058 - 5995.330: 99.9925% ( 1) 00:09:55.029 5995.330 - 6019.603: 100.0000% ( 1) 00:09:55.029 00:09:55.029 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:55.029 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:55.029 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:55.029 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:55.029 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:55.029 [ 00:09:55.029 { 00:09:55.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.029 "subtype": "Discovery", 00:09:55.029 "listen_addresses": [], 00:09:55.029 "allow_any_host": true, 00:09:55.029 "hosts": [] 00:09:55.029 }, 00:09:55.029 { 00:09:55.029 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:55.029 "subtype": "NVMe", 00:09:55.029 "listen_addresses": [ 00:09:55.029 { 00:09:55.029 "trtype": "VFIOUSER", 00:09:55.029 "adrfam": "IPv4", 00:09:55.029 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:55.029 "trsvcid": "0" 00:09:55.029 } 00:09:55.029 ], 00:09:55.029 "allow_any_host": true, 00:09:55.029 "hosts": [], 00:09:55.029 "serial_number": "SPDK1", 00:09:55.029 "model_number": "SPDK bdev Controller", 00:09:55.029 "max_namespaces": 32, 00:09:55.029 "min_cntlid": 1, 00:09:55.029 "max_cntlid": 65519, 00:09:55.029 "namespaces": [ 00:09:55.029 { 00:09:55.029 "nsid": 1, 00:09:55.029 "bdev_name": "Malloc1", 00:09:55.029 "name": "Malloc1", 00:09:55.029 "nguid": "1A13905382FB4E13B6C43AF2FC947B56", 00:09:55.029 "uuid": "1a139053-82fb-4e13-b6c4-3af2fc947b56" 00:09:55.029 } 00:09:55.029 ] 00:09:55.029 }, 00:09:55.029 { 00:09:55.029 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:55.029 "subtype": "NVMe", 00:09:55.029 "listen_addresses": [ 00:09:55.030 { 00:09:55.030 "trtype": "VFIOUSER", 00:09:55.030 "adrfam": "IPv4", 00:09:55.030 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:55.030 "trsvcid": "0" 00:09:55.030 } 00:09:55.030 ], 00:09:55.030 "allow_any_host": true, 00:09:55.030 "hosts": [], 00:09:55.030 "serial_number": "SPDK2", 00:09:55.030 "model_number": "SPDK bdev Controller", 00:09:55.030 "max_namespaces": 32, 00:09:55.030 "min_cntlid": 1, 00:09:55.030 "max_cntlid": 65519, 00:09:55.030 "namespaces": [ 00:09:55.030 { 00:09:55.030 "nsid": 1, 00:09:55.030 "bdev_name": "Malloc2", 00:09:55.030 "name": "Malloc2", 00:09:55.030 "nguid": "DA581A9BBCCC42D1814528059587688B", 00:09:55.030 "uuid": "da581a9b-bccc-42d1-8145-28059587688b" 00:09:55.030 } 00:09:55.030 ] 00:09:55.030 } 00:09:55.030 ] 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3776308 00:09:55.030 
13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:55.030 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:55.030 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.288 [2024-07-15 13:02:16.812416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:55.288 Malloc3 00:09:55.288 13:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:55.546 [2024-07-15 13:02:17.166006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:55.546 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:55.546 Asynchronous Event Request test 00:09:55.546 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.546 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:55.546 Registering asynchronous event callbacks... 00:09:55.546 Starting namespace attribute notice tests for all controllers... 00:09:55.546 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:55.546 aer_cb - Changed Namespace 00:09:55.546 Cleaning up... 
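For reference, the namespace-change AER sequence exercised above reduces to a handful of commands. This is a minimal sketch of what nvmf_vfio_user.sh does between steps @30 and @44; the binaries and RPC calls are the ones shown in the trace, while $SPDK_DIR (standing in for the workspace checkout) and the polling loop (standing in for the waitforfile helper) are assumptions:

  # Launch the AER listener; it creates the touch file once its
  # Asynchronous Event Request callbacks are registered with the target.
  $SPDK_DIR/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  # Wait for the readiness marker, then clear it before triggering the event.
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file
  # Attaching a second namespace makes the target raise a Namespace Attribute
  # Changed AEN, which the listener logs as "aer_cb - Changed Namespace"
  # before cleaning up.
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait $aerpid

The nvmf_get_subsystems listing that follows confirms the effect: Malloc3 is now exported from cnode1 as nsid 2.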
00:09:55.805 [ 00:09:55.805 { 00:09:55.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.805 "subtype": "Discovery", 00:09:55.805 "listen_addresses": [], 00:09:55.805 "allow_any_host": true, 00:09:55.805 "hosts": [] 00:09:55.805 }, 00:09:55.805 { 00:09:55.805 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:55.805 "subtype": "NVMe", 00:09:55.805 "listen_addresses": [ 00:09:55.805 { 00:09:55.805 "trtype": "VFIOUSER", 00:09:55.805 "adrfam": "IPv4", 00:09:55.805 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:55.805 "trsvcid": "0" 00:09:55.805 } 00:09:55.805 ], 00:09:55.805 "allow_any_host": true, 00:09:55.805 "hosts": [], 00:09:55.805 "serial_number": "SPDK1", 00:09:55.805 "model_number": "SPDK bdev Controller", 00:09:55.805 "max_namespaces": 32, 00:09:55.805 "min_cntlid": 1, 00:09:55.805 "max_cntlid": 65519, 00:09:55.805 "namespaces": [ 00:09:55.805 { 00:09:55.805 "nsid": 1, 00:09:55.805 "bdev_name": "Malloc1", 00:09:55.805 "name": "Malloc1", 00:09:55.805 "nguid": "1A13905382FB4E13B6C43AF2FC947B56", 00:09:55.805 "uuid": "1a139053-82fb-4e13-b6c4-3af2fc947b56" 00:09:55.805 }, 00:09:55.805 { 00:09:55.805 "nsid": 2, 00:09:55.805 "bdev_name": "Malloc3", 00:09:55.805 "name": "Malloc3", 00:09:55.805 "nguid": "F7558F283BFB4FE2B2467519162D5EA4", 00:09:55.805 "uuid": "f7558f28-3bfb-4fe2-b246-7519162d5ea4" 00:09:55.805 } 00:09:55.805 ] 00:09:55.805 }, 00:09:55.805 { 00:09:55.805 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:55.805 "subtype": "NVMe", 00:09:55.805 "listen_addresses": [ 00:09:55.805 { 00:09:55.805 "trtype": "VFIOUSER", 00:09:55.805 "adrfam": "IPv4", 00:09:55.805 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:55.805 "trsvcid": "0" 00:09:55.805 } 00:09:55.805 ], 00:09:55.805 "allow_any_host": true, 00:09:55.805 "hosts": [], 00:09:55.805 "serial_number": "SPDK2", 00:09:55.805 "model_number": "SPDK bdev Controller", 00:09:55.805 "max_namespaces": 32, 00:09:55.805 "min_cntlid": 1, 00:09:55.805 "max_cntlid": 65519, 00:09:55.805 "namespaces": [ 00:09:55.805 { 00:09:55.805 "nsid": 1, 00:09:55.805 "bdev_name": "Malloc2", 00:09:55.805 "name": "Malloc2", 00:09:55.805 "nguid": "DA581A9BBCCC42D1814528059587688B", 00:09:55.805 "uuid": "da581a9b-bccc-42d1-8145-28059587688b" 00:09:55.805 } 00:09:55.805 ] 00:09:55.805 } 00:09:55.805 ] 00:09:55.805 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3776308 00:09:55.805 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:55.805 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:55.805 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:55.805 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:55.805 [2024-07-15 13:02:17.438587] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
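The identify pass that starts here can be reproduced by hand while the target is still serving /var/run/vfio-user/domain/vfio-user2/2. A sketch using the same command line as step @83, with $SPDK_DIR again assumed to be the workspace checkout:

  # -L turns on DEBUG-level tracing for the named components, which is what
  # produces the nvme_vfio_user/vfio_user_pci register traces below; drop
  # the -L flags to get only the identify report itself.
  $SPDK_DIR/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci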
00:09:55.805 [2024-07-15 13:02:17.438625] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776434 ] 00:09:55.805 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.805 [2024-07-15 13:02:17.471942] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:55.805 [2024-07-15 13:02:17.480203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.806 [2024-07-15 13:02:17.480232] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbe72d5b000 00:09:55.806 [2024-07-15 13:02:17.481204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.482205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.483231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.484234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.485243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.486251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.487268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.488274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.806 [2024-07-15 13:02:17.489287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.806 [2024-07-15 13:02:17.489308] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbe72d50000 00:09:55.806 [2024-07-15 13:02:17.490423] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:56.066 [2024-07-15 13:02:17.506682] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:56.066 [2024-07-15 13:02:17.506716] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:56.066 [2024-07-15 13:02:17.508798] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:56.066 [2024-07-15 13:02:17.508848] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:56.066 [2024-07-15 13:02:17.508953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:09:56.066 [2024-07-15 13:02:17.508977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:56.066 [2024-07-15 13:02:17.508988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:56.066 [2024-07-15 13:02:17.510887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:56.066 [2024-07-15 13:02:17.510915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:56.066 [2024-07-15 13:02:17.510946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:56.066 [2024-07-15 13:02:17.511806] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:56.066 [2024-07-15 13:02:17.511825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:56.066 [2024-07-15 13:02:17.511838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.512812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:56.066 [2024-07-15 13:02:17.512832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.513820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:56.066 [2024-07-15 13:02:17.513839] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:56.066 [2024-07-15 13:02:17.513849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.513860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.513970] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:56.066 [2024-07-15 13:02:17.513980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.513989] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:56.066 [2024-07-15 13:02:17.514829] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:56.066 [2024-07-15 13:02:17.515834] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:56.066 [2024-07-15 13:02:17.516844] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:56.066 [2024-07-15 13:02:17.517841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:56.066 [2024-07-15 13:02:17.517942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:56.066 [2024-07-15 13:02:17.518871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:56.066 [2024-07-15 13:02:17.518896] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:56.066 [2024-07-15 13:02:17.518906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:56.066 [2024-07-15 13:02:17.518931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:56.066 [2024-07-15 13:02:17.518948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:56.066 [2024-07-15 13:02:17.518973] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:56.066 [2024-07-15 13:02:17.518984] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:56.066 [2024-07-15 13:02:17.519001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.524891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.524913] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:56.067 [2024-07-15 13:02:17.524927] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:56.067 [2024-07-15 13:02:17.524935] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:56.067 [2024-07-15 13:02:17.524943] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:56.067 [2024-07-15 13:02:17.524950] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:56.067 [2024-07-15 13:02:17.524958] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:56.067 [2024-07-15 13:02:17.524966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.524979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.524994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:09:56.067 [2024-07-15 13:02:17.532905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.532932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.067 [2024-07-15 13:02:17.532948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.067 [2024-07-15 13:02:17.532975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.067 [2024-07-15 13:02:17.532988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.067 [2024-07-15 13:02:17.532996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.533012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.533028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.540904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.540922] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:56.067 [2024-07-15 13:02:17.540931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.540942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.540956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.540970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.548900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.548971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.548988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.549001] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:56.067 [2024-07-15 13:02:17.549010] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:56.067 [2024-07-15 13:02:17.549020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.556899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.556923] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:56.067 [2024-07-15 13:02:17.556939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.556955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.556968] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:56.067 [2024-07-15 13:02:17.556976] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:56.067 [2024-07-15 13:02:17.556986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.564898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.564926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.564942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.564956] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:56.067 [2024-07-15 13:02:17.564965] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:56.067 [2024-07-15 13:02:17.564974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.572887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.572909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:56.067 
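The records above walk SPDK's controller bring-up state machine in order: read VS and CAP, check CC.EN, wait for CSTS.RDY = 0, write CC.EN = 1, wait for CSTS.RDY = 1, then identify controller, configure AER, set the keep-alive timeout, negotiate queue counts, and identify the namespaces. To pull just those transitions out of a saved capture, something like the following works (nvmf.log is a hypothetical file name for this console output):

  # Each transition is logged as "setting state to <state> (timeout)";
  # strip the timeout suffix and count how often each state is entered.
  grep -oE 'setting state to [^(]+' nvmf.log | sed 's/ *$//' | sort | uniq -c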
[2024-07-15 13:02:17.572978] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:56.067 [2024-07-15 13:02:17.572986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:56.067 [2024-07-15 13:02:17.572995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:56.067 [2024-07-15 13:02:17.573019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.580885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.580938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.588900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.588926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.596888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.596913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.604905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.604948] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:56.067 [2024-07-15 13:02:17.604960] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:56.067 [2024-07-15 13:02:17.604966] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:56.067 [2024-07-15 13:02:17.604972] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:56.067 [2024-07-15 13:02:17.604982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:56.067 [2024-07-15 13:02:17.604995] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:56.067 [2024-07-15 13:02:17.605004] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:56.067 [2024-07-15 13:02:17.605013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.605024] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:56.067 [2024-07-15 13:02:17.605032] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:56.067 [2024-07-15 13:02:17.605041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:09:56.067 [2024-07-15 13:02:17.605053] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:56.067 [2024-07-15 13:02:17.605061] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:56.067 [2024-07-15 13:02:17.605070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.612887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.612916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.612935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:56.067 [2024-07-15 13:02:17.612948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:56.067 ===================================================== 00:09:56.067 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:56.067 ===================================================== 00:09:56.067 Controller Capabilities/Features 00:09:56.067 ================================ 00:09:56.067 Vendor ID: 4e58 00:09:56.067 Subsystem Vendor ID: 4e58 00:09:56.067 Serial Number: SPDK2 00:09:56.067 Model Number: SPDK bdev Controller 00:09:56.067 Firmware Version: 24.09 00:09:56.067 Recommended Arb Burst: 6 00:09:56.067 IEEE OUI Identifier: 8d 6b 50 00:09:56.067 Multi-path I/O 00:09:56.067 May have multiple subsystem ports: Yes 00:09:56.067 May have multiple controllers: Yes 00:09:56.067 Associated with SR-IOV VF: No 00:09:56.067 Max Data Transfer Size: 131072 00:09:56.067 Max Number of Namespaces: 32 00:09:56.067 Max Number of I/O Queues: 127 00:09:56.067 NVMe Specification Version (VS): 1.3 00:09:56.067 NVMe Specification Version (Identify): 1.3 00:09:56.068 Maximum Queue Entries: 256 00:09:56.068 Contiguous Queues Required: Yes 00:09:56.068 Arbitration Mechanisms Supported 00:09:56.068 Weighted Round Robin: Not Supported 00:09:56.068 Vendor Specific: Not Supported 00:09:56.068 Reset Timeout: 15000 ms 00:09:56.068 Doorbell Stride: 4 bytes 00:09:56.068 NVM Subsystem Reset: Not Supported 00:09:56.068 Command Sets Supported 00:09:56.068 NVM Command Set: Supported 00:09:56.068 Boot Partition: Not Supported 00:09:56.068 Memory Page Size Minimum: 4096 bytes 00:09:56.068 Memory Page Size Maximum: 4096 bytes 00:09:56.068 Persistent Memory Region: Not Supported 00:09:56.068 Optional Asynchronous Events Supported 00:09:56.068 Namespace Attribute Notices: Supported 00:09:56.068 Firmware Activation Notices: Not Supported 00:09:56.068 ANA Change Notices: Not Supported 00:09:56.068 PLE Aggregate Log Change Notices: Not Supported 00:09:56.068 LBA Status Info Alert Notices: Not Supported 00:09:56.068 EGE Aggregate Log Change Notices: Not Supported 00:09:56.068 Normal NVM Subsystem Shutdown event: Not Supported 00:09:56.068 Zone Descriptor Change Notices: Not Supported 00:09:56.068 Discovery Log Change Notices: Not Supported 00:09:56.068 Controller Attributes 00:09:56.068 128-bit Host Identifier: Supported 00:09:56.068 Non-Operational Permissive Mode: Not Supported 00:09:56.068 NVM Sets: Not Supported 00:09:56.068 Read Recovery Levels: Not Supported 
00:09:56.068 Endurance Groups: Not Supported 00:09:56.068 Predictable Latency Mode: Not Supported 00:09:56.068 Traffic Based Keep Alive: Not Supported 00:09:56.068 Namespace Granularity: Not Supported 00:09:56.068 SQ Associations: Not Supported 00:09:56.068 UUID List: Not Supported 00:09:56.068 Multi-Domain Subsystem: Not Supported 00:09:56.068 Fixed Capacity Management: Not Supported 00:09:56.068 Variable Capacity Management: Not Supported 00:09:56.068 Delete Endurance Group: Not Supported 00:09:56.068 Delete NVM Set: Not Supported 00:09:56.068 Extended LBA Formats Supported: Not Supported 00:09:56.068 Flexible Data Placement Supported: Not Supported 00:09:56.068 00:09:56.068 Controller Memory Buffer Support 00:09:56.068 ================================ 00:09:56.068 Supported: No 00:09:56.068 00:09:56.068 Persistent Memory Region Support 00:09:56.068 ================================ 00:09:56.068 Supported: No 00:09:56.068 00:09:56.068 Admin Command Set Attributes 00:09:56.068 ============================ 00:09:56.068 Security Send/Receive: Not Supported 00:09:56.068 Format NVM: Not Supported 00:09:56.068 Firmware Activate/Download: Not Supported 00:09:56.068 Namespace Management: Not Supported 00:09:56.068 Device Self-Test: Not Supported 00:09:56.068 Directives: Not Supported 00:09:56.068 NVMe-MI: Not Supported 00:09:56.068 Virtualization Management: Not Supported 00:09:56.068 Doorbell Buffer Config: Not Supported 00:09:56.068 Get LBA Status Capability: Not Supported 00:09:56.068 Command & Feature Lockdown Capability: Not Supported 00:09:56.068 Abort Command Limit: 4 00:09:56.068 Async Event Request Limit: 4 00:09:56.068 Number of Firmware Slots: N/A 00:09:56.068 Firmware Slot 1 Read-Only: N/A 00:09:56.068 Firmware Activation Without Reset: N/A 00:09:56.068 Multiple Update Detection Support: N/A 00:09:56.068 Firmware Update Granularity: No Information Provided 00:09:56.068 Per-Namespace SMART Log: No 00:09:56.068 Asymmetric Namespace Access Log Page: Not Supported 00:09:56.068 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:56.068 Command Effects Log Page: Supported 00:09:56.068 Get Log Page Extended Data: Supported 00:09:56.068 Telemetry Log Pages: Not Supported 00:09:56.068 Persistent Event Log Pages: Not Supported 00:09:56.068 Supported Log Pages Log Page: May Support 00:09:56.068 Commands Supported & Effects Log Page: Not Supported 00:09:56.068 Feature Identifiers & Effects Log Page: May Support 00:09:56.068 NVMe-MI Commands & Effects Log Page: May Support 00:09:56.068 Data Area 4 for Telemetry Log: Not Supported 00:09:56.068 Error Log Page Entries Supported: 128 00:09:56.068 Keep Alive: Supported 00:09:56.068 Keep Alive Granularity: 10000 ms 00:09:56.068 00:09:56.068 NVM Command Set Attributes 00:09:56.068 ========================== 00:09:56.068 Submission Queue Entry Size 00:09:56.068 Max: 64 00:09:56.068 Min: 64 00:09:56.068 Completion Queue Entry Size 00:09:56.068 Max: 16 00:09:56.068 Min: 16 00:09:56.068 Number of Namespaces: 32 00:09:56.068 Compare Command: Supported 00:09:56.068 Write Uncorrectable Command: Not Supported 00:09:56.068 Dataset Management Command: Supported 00:09:56.068 Write Zeroes Command: Supported 00:09:56.068 Set Features Save Field: Not Supported 00:09:56.068 Reservations: Not Supported 00:09:56.068 Timestamp: Not Supported 00:09:56.068 Copy: Supported 00:09:56.068 Volatile Write Cache: Present 00:09:56.068 Atomic Write Unit (Normal): 1 00:09:56.068 Atomic Write Unit (PFail): 1 00:09:56.068 Atomic Compare & Write Unit: 1 00:09:56.068 Fused Compare & Write:
Supported 00:09:56.068 Scatter-Gather List 00:09:56.068 SGL Command Set: Supported (Dword aligned) 00:09:56.068 SGL Keyed: Not Supported 00:09:56.068 SGL Bit Bucket Descriptor: Not Supported 00:09:56.068 SGL Metadata Pointer: Not Supported 00:09:56.068 Oversized SGL: Not Supported 00:09:56.068 SGL Metadata Address: Not Supported 00:09:56.068 SGL Offset: Not Supported 00:09:56.068 Transport SGL Data Block: Not Supported 00:09:56.068 Replay Protected Memory Block: Not Supported 00:09:56.068 00:09:56.068 Firmware Slot Information 00:09:56.068 ========================= 00:09:56.068 Active slot: 1 00:09:56.068 Slot 1 Firmware Revision: 24.09 00:09:56.068 00:09:56.068 00:09:56.068 Commands Supported and Effects 00:09:56.068 ============================== 00:09:56.068 Admin Commands 00:09:56.068 -------------- 00:09:56.068 Get Log Page (02h): Supported 00:09:56.068 Identify (06h): Supported 00:09:56.068 Abort (08h): Supported 00:09:56.068 Set Features (09h): Supported 00:09:56.068 Get Features (0Ah): Supported 00:09:56.068 Asynchronous Event Request (0Ch): Supported 00:09:56.068 Keep Alive (18h): Supported 00:09:56.068 I/O Commands 00:09:56.068 ------------ 00:09:56.068 Flush (00h): Supported LBA-Change 00:09:56.068 Write (01h): Supported LBA-Change 00:09:56.068 Read (02h): Supported 00:09:56.068 Compare (05h): Supported 00:09:56.068 Write Zeroes (08h): Supported LBA-Change 00:09:56.068 Dataset Management (09h): Supported LBA-Change 00:09:56.068 Copy (19h): Supported LBA-Change 00:09:56.068 00:09:56.068 Error Log 00:09:56.068 ========= 00:09:56.068 00:09:56.068 Arbitration 00:09:56.068 =========== 00:09:56.068 Arbitration Burst: 1 00:09:56.068 00:09:56.068 Power Management 00:09:56.068 ================ 00:09:56.068 Number of Power States: 1 00:09:56.068 Current Power State: Power State #0 00:09:56.068 Power State #0: 00:09:56.068 Max Power: 0.00 W 00:09:56.068 Non-Operational State: Operational 00:09:56.068 Entry Latency: Not Reported 00:09:56.068 Exit Latency: Not Reported 00:09:56.068 Relative Read Throughput: 0 00:09:56.068 Relative Read Latency: 0 00:09:56.068 Relative Write Throughput: 0 00:09:56.068 Relative Write Latency: 0 00:09:56.068 Idle Power: Not Reported 00:09:56.068 Active Power: Not Reported 00:09:56.068 Non-Operational Permissive Mode: Not Supported 00:09:56.068 00:09:56.068 Health Information 00:09:56.068 ================== 00:09:56.068 Critical Warnings: 00:09:56.068 Available Spare Space: OK 00:09:56.068 Temperature: OK 00:09:56.068 Device Reliability: OK 00:09:56.068 Read Only: No 00:09:56.068 Volatile Memory Backup: OK 00:09:56.068 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:56.068 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:56.068 Available Spare: 0% 00:09:56.068 [2024-07-15 13:02:17.613075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:56.067 [2024-07-15 13:02:17.620890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:56.068 [2024-07-15 13:02:17.620943] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:56.068 [2024-07-15 13:02:17.620961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.068 [2024-07-15 13:02:17.620972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.068 [2024-07-15 13:02:17.620982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.068 [2024-07-15 13:02:17.620993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.068 [2024-07-15 13:02:17.621087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:56.068 [2024-07-15 13:02:17.621108] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:56.068 [2024-07-15 13:02:17.622084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:56.068 [2024-07-15 13:02:17.622153] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:56.068 [2024-07-15 13:02:17.622182] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:56.068 [2024-07-15 13:02:17.623098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:56.068 [2024-07-15 13:02:17.623123] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:56.069 [2024-07-15 13:02:17.623178] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:56.069 [2024-07-15 13:02:17.625888] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:56.069 Available Spare Threshold: 0% 00:09:56.069 Life Percentage Used: 0% 00:09:56.069 Data Units Read: 0 00:09:56.069 Data Units Written: 0 00:09:56.069 Host Read Commands: 0 00:09:56.069 Host Write Commands: 0 00:09:56.069 Controller Busy Time: 0 minutes 00:09:56.069 Power Cycles: 0 00:09:56.069 Power On Hours: 0 hours 00:09:56.069 Unsafe Shutdowns: 0 00:09:56.069 Unrecoverable Media Errors: 0 00:09:56.069 Lifetime Error Log Entries: 0 00:09:56.069 Warning Temperature Time: 0 minutes 00:09:56.069 Critical Temperature Time: 0 minutes 00:09:56.069 00:09:56.069 Number of Queues 00:09:56.069 ================ 00:09:56.069 Number of I/O Submission Queues: 127 00:09:56.069 Number of I/O Completion Queues: 127 00:09:56.069 00:09:56.069 Active Namespaces 00:09:56.069 ================= 00:09:56.069 Namespace ID:1 00:09:56.069 Error Recovery Timeout: Unlimited 00:09:56.069 Command Set Identifier: NVM (00h) 00:09:56.069 Deallocate: Supported 00:09:56.069 Deallocated/Unwritten Error: Not Supported 00:09:56.069 Deallocated Read Value: Unknown 00:09:56.069 Deallocate in Write Zeroes: Not Supported 00:09:56.069 Deallocated Guard Field: 0xFFFF 00:09:56.069 Flush: Supported 00:09:56.069 Reservation: Supported 00:09:56.069 Namespace Sharing Capabilities: Multiple Controllers 00:09:56.069 Size (in LBAs): 131072 (0GiB) 00:09:56.069 Capacity (in LBAs): 131072 (0GiB) 00:09:56.069 Utilization (in LBAs): 131072 (0GiB) 00:09:56.069 NGUID: DA581A9BBCCC42D1814528059587688B 00:09:56.069 UUID: da581a9b-bccc-42d1-8145-28059587688b 00:09:56.069 Thin Provisioning: Not Supported 00:09:56.069 Per-NS Atomic Units: Yes 00:09:56.069 Atomic Boundary Size (Normal): 0 00:09:56.069 Atomic Boundary Size
(PFail): 0 00:09:56.069 Atomic Boundary Offset: 0 00:09:56.069 Maximum Single Source Range Length: 65535 00:09:56.069 Maximum Copy Length: 65535 00:09:56.069 Maximum Source Range Count: 1 00:09:56.069 NGUID/EUI64 Never Reused: No 00:09:56.069 Namespace Write Protected: No 00:09:56.069 Number of LBA Formats: 1 00:09:56.069 Current LBA Format: LBA Format #00 00:09:56.069 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:56.069 00:09:56.069 13:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:56.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.327 [2024-07-15 13:02:17.853675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:01.604 Initializing NVMe Controllers 00:10:01.604 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:01.604 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:01.604 Initialization complete. Launching workers. 00:10:01.604 ======================================================== 00:10:01.604 Latency(us) 00:10:01.604 Device Information : IOPS MiB/s Average min max 00:10:01.604 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34353.40 134.19 3725.45 1180.59 8001.97 00:10:01.604 ======================================================== 00:10:01.604 Total : 34353.40 134.19 3725.45 1180.59 8001.97 00:10:01.604 00:10:01.604 [2024-07-15 13:02:22.958256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:01.604 13:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:01.604 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.604 [2024-07-15 13:02:23.200956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:06.898 Initializing NVMe Controllers 00:10:06.898 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:06.898 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:06.898 Initialization complete. Launching workers. 
00:10:06.898 ======================================================== 00:10:06.898 Latency(us) 00:10:06.898 Device Information : IOPS MiB/s Average min max 00:10:06.898 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31978.97 124.92 4002.19 1207.26 8987.50 00:10:06.898 ======================================================== 00:10:06.898 Total : 31978.97 124.92 4002.19 1207.26 8987.50 00:10:06.898 00:10:06.898 [2024-07-15 13:02:28.224986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:06.898 13:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:06.898 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.898 [2024-07-15 13:02:28.431931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.168 [2024-07-15 13:02:33.572042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.168 Initializing NVMe Controllers 00:10:12.168 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:12.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:12.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:12.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:12.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:12.168 Initialization complete. Launching workers. 00:10:12.168 Starting thread on core 2 00:10:12.168 Starting thread on core 3 00:10:12.168 Starting thread on core 1 00:10:12.168 13:02:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:12.168 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.168 [2024-07-15 13:02:33.866351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.457 [2024-07-15 13:02:36.945455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.457 Initializing NVMe Controllers 00:10:15.457 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.457 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:15.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:15.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:15.457 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:15.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:15.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:15.457 Initialization complete. Launching workers. 
00:10:15.457 Starting thread on core 1 with urgent priority queue 00:10:15.457 Starting thread on core 2 with urgent priority queue 00:10:15.457 Starting thread on core 3 with urgent priority queue 00:10:15.457 Starting thread on core 0 with urgent priority queue 00:10:15.457 SPDK bdev Controller (SPDK2 ) core 0: 5179.33 IO/s 19.31 secs/100000 ios 00:10:15.457 SPDK bdev Controller (SPDK2 ) core 1: 5437.67 IO/s 18.39 secs/100000 ios 00:10:15.457 SPDK bdev Controller (SPDK2 ) core 2: 4985.67 IO/s 20.06 secs/100000 ios 00:10:15.457 SPDK bdev Controller (SPDK2 ) core 3: 5603.00 IO/s 17.85 secs/100000 ios 00:10:15.457 ======================================================== 00:10:15.457 00:10:15.457 13:02:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.457 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.715 [2024-07-15 13:02:37.243674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.715 Initializing NVMe Controllers 00:10:15.715 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.715 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.715 Namespace ID: 1 size: 0GB 00:10:15.715 Initialization complete. 00:10:15.715 INFO: using host memory buffer for IO 00:10:15.715 Hello world! 00:10:15.715 [2024-07-15 13:02:37.253799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.715 13:02:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.715 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.973 [2024-07-15 13:02:37.541575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.353 Initializing NVMe Controllers 00:10:17.353 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.353 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.353 Initialization complete. Launching workers. 
00:10:17.353 submit (in ns) avg, min, max = 8642.5, 3514.4, 4016365.6 00:10:17.353 complete (in ns) avg, min, max = 26308.8, 2056.7, 4015524.4 00:10:17.353 00:10:17.353 Submit histogram 00:10:17.353 ================ 00:10:17.353 Range in us Cumulative Count 00:10:17.353 3.508 - 3.532: 0.1338% ( 18) 00:10:17.353 3.532 - 3.556: 0.5502% ( 56) 00:10:17.353 3.556 - 3.579: 1.9630% ( 190) 00:10:17.353 3.579 - 3.603: 4.9669% ( 404) 00:10:17.353 3.603 - 3.627: 10.5212% ( 747) 00:10:17.353 3.627 - 3.650: 18.2170% ( 1035) 00:10:17.353 3.650 - 3.674: 26.8347% ( 1159) 00:10:17.353 3.674 - 3.698: 34.8576% ( 1079) 00:10:17.353 3.698 - 3.721: 42.9400% ( 1087) 00:10:17.353 3.721 - 3.745: 48.9999% ( 815) 00:10:17.353 3.745 - 3.769: 54.1973% ( 699) 00:10:17.353 3.769 - 3.793: 58.4802% ( 576) 00:10:17.353 3.793 - 3.816: 62.0344% ( 478) 00:10:17.353 3.816 - 3.840: 65.6034% ( 480) 00:10:17.353 3.840 - 3.864: 69.6483% ( 544) 00:10:17.353 3.864 - 3.887: 73.3660% ( 500) 00:10:17.353 3.887 - 3.911: 77.2548% ( 523) 00:10:17.353 3.911 - 3.935: 81.1138% ( 519) 00:10:17.353 3.935 - 3.959: 84.1252% ( 405) 00:10:17.353 3.959 - 3.982: 86.3261% ( 296) 00:10:17.353 3.982 - 4.006: 88.3263% ( 269) 00:10:17.353 4.006 - 4.030: 89.7018% ( 185) 00:10:17.353 4.030 - 4.053: 90.8618% ( 156) 00:10:17.353 4.053 - 4.077: 92.0589% ( 161) 00:10:17.353 4.077 - 4.101: 92.9140% ( 115) 00:10:17.353 4.101 - 4.124: 93.9549% ( 140) 00:10:17.353 4.124 - 4.148: 94.6316% ( 91) 00:10:17.353 4.148 - 4.172: 95.1818% ( 74) 00:10:17.353 4.172 - 4.196: 95.5684% ( 52) 00:10:17.353 4.196 - 4.219: 95.8733% ( 41) 00:10:17.353 4.219 - 4.243: 96.1187% ( 33) 00:10:17.353 4.243 - 4.267: 96.2599% ( 19) 00:10:17.353 4.267 - 4.290: 96.4235% ( 22) 00:10:17.353 4.290 - 4.314: 96.6168% ( 26) 00:10:17.353 4.314 - 4.338: 96.6986% ( 11) 00:10:17.353 4.338 - 4.361: 96.8473% ( 20) 00:10:17.353 4.361 - 4.385: 96.9961% ( 20) 00:10:17.353 4.385 - 4.409: 97.0332% ( 5) 00:10:17.353 4.409 - 4.433: 97.1150% ( 11) 00:10:17.353 4.433 - 4.456: 97.1819% ( 9) 00:10:17.353 4.456 - 4.480: 97.2043% ( 3) 00:10:17.353 4.480 - 4.504: 97.2340% ( 4) 00:10:17.353 4.504 - 4.527: 97.2712% ( 5) 00:10:17.353 4.527 - 4.551: 97.3084% ( 5) 00:10:17.353 4.551 - 4.575: 97.3158% ( 1) 00:10:17.353 4.575 - 4.599: 97.3232% ( 1) 00:10:17.353 4.599 - 4.622: 97.3381% ( 2) 00:10:17.353 4.622 - 4.646: 97.3753% ( 5) 00:10:17.353 4.646 - 4.670: 97.3976% ( 3) 00:10:17.353 4.670 - 4.693: 97.4199% ( 3) 00:10:17.353 4.693 - 4.717: 97.4348% ( 2) 00:10:17.353 4.717 - 4.741: 97.4719% ( 5) 00:10:17.353 4.741 - 4.764: 97.4868% ( 2) 00:10:17.353 4.764 - 4.788: 97.5240% ( 5) 00:10:17.353 4.788 - 4.812: 97.5983% ( 10) 00:10:17.353 4.812 - 4.836: 97.6504% ( 7) 00:10:17.353 4.836 - 4.859: 97.6727% ( 3) 00:10:17.353 4.859 - 4.883: 97.7322% ( 8) 00:10:17.353 4.883 - 4.907: 97.7917% ( 8) 00:10:17.353 4.907 - 4.930: 97.8586% ( 9) 00:10:17.353 4.930 - 4.954: 97.8958% ( 5) 00:10:17.353 4.954 - 4.978: 97.9404% ( 6) 00:10:17.353 4.978 - 5.001: 97.9850% ( 6) 00:10:17.353 5.001 - 5.025: 98.0073% ( 3) 00:10:17.353 5.025 - 5.049: 98.0519% ( 6) 00:10:17.353 5.049 - 5.073: 98.0965% ( 6) 00:10:17.353 5.073 - 5.096: 98.1263% ( 4) 00:10:17.353 5.096 - 5.120: 98.1634% ( 5) 00:10:17.353 5.120 - 5.144: 98.1709% ( 1) 00:10:17.353 5.144 - 5.167: 98.2006% ( 4) 00:10:17.353 5.167 - 5.191: 98.2155% ( 2) 00:10:17.353 5.191 - 5.215: 98.2378% ( 3) 00:10:17.353 5.239 - 5.262: 98.2527% ( 2) 00:10:17.353 5.286 - 5.310: 98.2675% ( 2) 00:10:17.353 5.333 - 5.357: 98.2750% ( 1) 00:10:17.353 5.428 - 5.452: 98.2824% ( 1) 00:10:17.353 5.499 - 5.523: 98.2898% ( 1) 
00:10:17.353 5.523 - 5.547: 98.2973% ( 1) 00:10:17.353 5.547 - 5.570: 98.3047% ( 1) 00:10:17.353 5.570 - 5.594: 98.3121% ( 1) 00:10:17.353 5.594 - 5.618: 98.3196% ( 1) 00:10:17.353 5.641 - 5.665: 98.3270% ( 1) 00:10:17.353 5.831 - 5.855: 98.3344% ( 1) 00:10:17.353 5.902 - 5.926: 98.3419% ( 1) 00:10:17.353 5.950 - 5.973: 98.3493% ( 1) 00:10:17.353 6.044 - 6.068: 98.3568% ( 1) 00:10:17.353 6.116 - 6.163: 98.3642% ( 1) 00:10:17.353 6.210 - 6.258: 98.3716% ( 1) 00:10:17.353 6.258 - 6.305: 98.3791% ( 1) 00:10:17.353 6.305 - 6.353: 98.3939% ( 2) 00:10:17.353 6.542 - 6.590: 98.4014% ( 1) 00:10:17.353 6.590 - 6.637: 98.4088% ( 1) 00:10:17.353 6.732 - 6.779: 98.4237% ( 2) 00:10:17.353 6.779 - 6.827: 98.4311% ( 1) 00:10:17.353 6.969 - 7.016: 98.4385% ( 1) 00:10:17.353 7.016 - 7.064: 98.4460% ( 1) 00:10:17.353 7.064 - 7.111: 98.4609% ( 2) 00:10:17.353 7.206 - 7.253: 98.4683% ( 1) 00:10:17.353 7.253 - 7.301: 98.4757% ( 1) 00:10:17.353 7.301 - 7.348: 98.4832% ( 1) 00:10:17.353 7.348 - 7.396: 98.4906% ( 1) 00:10:17.353 7.490 - 7.538: 98.4980% ( 1) 00:10:17.353 7.538 - 7.585: 98.5055% ( 1) 00:10:17.353 7.585 - 7.633: 98.5203% ( 2) 00:10:17.353 7.727 - 7.775: 98.5278% ( 1) 00:10:17.353 7.775 - 7.822: 98.5352% ( 1) 00:10:17.353 7.822 - 7.870: 98.5501% ( 2) 00:10:17.353 7.917 - 7.964: 98.5724% ( 3) 00:10:17.353 7.964 - 8.012: 98.5947% ( 3) 00:10:17.353 8.107 - 8.154: 98.6170% ( 3) 00:10:17.353 8.154 - 8.201: 98.6244% ( 1) 00:10:17.353 8.201 - 8.249: 98.6319% ( 1) 00:10:17.353 8.249 - 8.296: 98.6467% ( 2) 00:10:17.353 8.344 - 8.391: 98.6542% ( 1) 00:10:17.353 8.439 - 8.486: 98.6616% ( 1) 00:10:17.353 8.486 - 8.533: 98.6690% ( 1) 00:10:17.353 8.581 - 8.628: 98.6765% ( 1) 00:10:17.353 8.770 - 8.818: 98.6839% ( 1) 00:10:17.353 8.913 - 8.960: 98.7062% ( 3) 00:10:17.353 9.197 - 9.244: 98.7137% ( 1) 00:10:17.353 9.244 - 9.292: 98.7211% ( 1) 00:10:17.353 9.956 - 10.003: 98.7285% ( 1) 00:10:17.353 10.003 - 10.050: 98.7360% ( 1) 00:10:17.353 10.145 - 10.193: 98.7434% ( 1) 00:10:17.353 10.287 - 10.335: 98.7508% ( 1) 00:10:17.353 10.430 - 10.477: 98.7657% ( 2) 00:10:17.353 10.761 - 10.809: 98.7731% ( 1) 00:10:17.353 11.662 - 11.710: 98.7806% ( 1) 00:10:17.353 11.994 - 12.041: 98.7880% ( 1) 00:10:17.353 12.041 - 12.089: 98.7954% ( 1) 00:10:17.353 12.231 - 12.326: 98.8029% ( 1) 00:10:17.353 12.326 - 12.421: 98.8103% ( 1) 00:10:17.353 12.610 - 12.705: 98.8252% ( 2) 00:10:17.353 12.705 - 12.800: 98.8326% ( 1) 00:10:17.353 12.800 - 12.895: 98.8401% ( 1) 00:10:17.353 12.895 - 12.990: 98.8624% ( 3) 00:10:17.353 13.084 - 13.179: 98.8698% ( 1) 00:10:17.353 13.179 - 13.274: 98.8772% ( 1) 00:10:17.353 13.274 - 13.369: 98.8921% ( 2) 00:10:17.353 13.748 - 13.843: 98.8995% ( 1) 00:10:17.354 14.317 - 14.412: 98.9070% ( 1) 00:10:17.354 14.412 - 14.507: 98.9144% ( 1) 00:10:17.354 14.507 - 14.601: 98.9367% ( 3) 00:10:17.354 15.739 - 15.834: 98.9442% ( 1) 00:10:17.354 17.067 - 17.161: 98.9516% ( 1) 00:10:17.354 17.161 - 17.256: 98.9739% ( 3) 00:10:17.354 17.256 - 17.351: 98.9888% ( 2) 00:10:17.354 17.351 - 17.446: 98.9962% ( 1) 00:10:17.354 17.446 - 17.541: 99.0185% ( 3) 00:10:17.354 17.541 - 17.636: 99.0259% ( 1) 00:10:17.354 17.636 - 17.730: 99.0408% ( 2) 00:10:17.354 17.730 - 17.825: 99.1003% ( 8) 00:10:17.354 17.825 - 17.920: 99.1821% ( 11) 00:10:17.354 17.920 - 18.015: 99.2490% ( 9) 00:10:17.354 18.015 - 18.110: 99.3011% ( 7) 00:10:17.354 18.110 - 18.204: 99.3382% ( 5) 00:10:17.354 18.204 - 18.299: 99.4349% ( 13) 00:10:17.354 18.299 - 18.394: 99.4721% ( 5) 00:10:17.354 18.394 - 18.489: 99.5390% ( 9) 00:10:17.354 18.489 - 18.584: 
99.6431% ( 14) 00:10:17.354 18.584 - 18.679: 99.6580% ( 2) 00:10:17.354 18.679 - 18.773: 99.7026% ( 6) 00:10:17.354 18.773 - 18.868: 99.7398% ( 5) 00:10:17.354 18.868 - 18.963: 99.7844% ( 6) 00:10:17.354 19.153 - 19.247: 99.7918% ( 1) 00:10:17.354 19.247 - 19.342: 99.7992% ( 1) 00:10:17.354 19.342 - 19.437: 99.8141% ( 2) 00:10:17.354 19.627 - 19.721: 99.8290% ( 2) 00:10:17.354 20.006 - 20.101: 99.8364% ( 1) 00:10:17.354 22.850 - 22.945: 99.8439% ( 1) 00:10:17.354 23.799 - 23.893: 99.8513% ( 1) 00:10:17.354 24.841 - 25.031: 99.8587% ( 1) 00:10:17.354 25.031 - 25.221: 99.8662% ( 1) 00:10:17.354 26.169 - 26.359: 99.8736% ( 1) 00:10:17.354 28.444 - 28.634: 99.8810% ( 1) 00:10:17.354 2803.484 - 2815.621: 99.8885% ( 1) 00:10:17.354 3980.705 - 4004.978: 99.9703% ( 11) 00:10:17.354 4004.978 - 4029.250: 100.0000% ( 4) 00:10:17.354 00:10:17.354 Complete histogram 00:10:17.354 ================== 00:10:17.354 Range in us Cumulative Count 00:10:17.354 2.050 - 2.062: 0.5725% ( 77) 00:10:17.354 2.062 - 2.074: 30.3889% ( 4010) 00:10:17.354 2.074 - 2.086: 37.0213% ( 892) 00:10:17.354 2.086 - 2.098: 42.9772% ( 801) 00:10:17.354 2.098 - 2.110: 57.5731% ( 1963) 00:10:17.354 2.110 - 2.121: 60.5101% ( 395) 00:10:17.354 2.121 - 2.133: 64.9193% ( 593) 00:10:17.354 2.133 - 2.145: 73.6635% ( 1176) 00:10:17.354 2.145 - 2.157: 74.7193% ( 142) 00:10:17.354 2.157 - 2.169: 77.8199% ( 417) 00:10:17.354 2.169 - 2.181: 80.8833% ( 412) 00:10:17.354 2.181 - 2.193: 81.7756% ( 120) 00:10:17.354 2.193 - 2.204: 83.3668% ( 214) 00:10:17.354 2.204 - 2.216: 87.3671% ( 538) 00:10:17.354 2.216 - 2.228: 89.6498% ( 307) 00:10:17.354 2.228 - 2.240: 91.3079% ( 223) 00:10:17.354 2.240 - 2.252: 92.9363% ( 219) 00:10:17.354 2.252 - 2.264: 93.3527% ( 56) 00:10:17.354 2.264 - 2.276: 93.6055% ( 34) 00:10:17.354 2.276 - 2.287: 93.9772% ( 50) 00:10:17.354 2.287 - 2.299: 94.6539% ( 91) 00:10:17.354 2.299 - 2.311: 95.0182% ( 49) 00:10:17.354 2.311 - 2.323: 95.1669% ( 20) 00:10:17.354 2.323 - 2.335: 95.2487% ( 11) 00:10:17.354 2.335 - 2.347: 95.3602% ( 15) 00:10:17.354 2.347 - 2.359: 95.4346% ( 10) 00:10:17.354 2.359 - 2.370: 95.7246% ( 39) 00:10:17.354 2.370 - 2.382: 96.1261% ( 54) 00:10:17.354 2.382 - 2.394: 96.4458% ( 43) 00:10:17.354 2.394 - 2.406: 96.6392% ( 26) 00:10:17.354 2.406 - 2.418: 96.8994% ( 35) 00:10:17.354 2.418 - 2.430: 97.0927% ( 26) 00:10:17.354 2.430 - 2.441: 97.2712% ( 24) 00:10:17.354 2.441 - 2.453: 97.3976% ( 17) 00:10:17.354 2.453 - 2.465: 97.5537% ( 21) 00:10:17.354 2.465 - 2.477: 97.6578% ( 14) 00:10:17.354 2.477 - 2.489: 97.7470% ( 12) 00:10:17.354 2.489 - 2.501: 97.8214% ( 10) 00:10:17.354 2.501 - 2.513: 97.9106% ( 12) 00:10:17.354 2.513 - 2.524: 97.9924% ( 11) 00:10:17.354 2.524 - 2.536: 98.0668% ( 10) 00:10:17.354 2.536 - 2.548: 98.1039% ( 5) 00:10:17.354 2.548 - 2.560: 98.1709% ( 9) 00:10:17.354 2.560 - 2.572: 98.2006% ( 4) 00:10:17.354 2.572 - 2.584: 98.2452% ( 6) 00:10:17.354 2.584 - 2.596: 98.2527% ( 1) 00:10:17.354 2.596 - 2.607: 98.2675% ( 2) 00:10:17.354 2.607 - 2.619: 98.2973% ( 4) 00:10:17.354 2.631 - 2.643: 98.3196% ( 3) 00:10:17.354 2.643 - 2.655: 98.3419% ( 3) 00:10:17.354 2.667 - 2.679: 98.3493% ( 1) 00:10:17.354 2.714 - 2.726: 98.3791% ( 4) 00:10:17.354 2.726 - 2.738: 98.3865% ( 1) 00:10:17.354 2.738 - 2.750: 98.3939% ( 1) 00:10:17.354 2.750 - 2.761: 98.4014% ( 1) 00:10:17.354 2.773 - 2.785: 98.4162% ( 2) 00:10:17.354 2.809 - 2.821: 98.4237% ( 1) 00:10:17.354 2.821 - 2.833: 98.4311% ( 1) 00:10:17.354 2.833 - 2.844: 98.4385% ( 1) 00:10:17.354 2.868 - 2.880: 98.4460% ( 1) 00:10:17.354 2.880 - 2.892: 
98.4534% ( 1) 00:10:17.354 2.892 - 2.904: 98.4609% ( 1) 00:10:17.354 2.904 - 2.916: 98.4683% ( 1) 00:10:17.354 2.916 - 2.927: 98.4757% ( 1) 00:10:17.354 3.129 - 3.153: 98.4832% ( 1) 00:10:17.354 3.153 - 3.176: 98.4906% ( 1) 00:10:17.354 3.224 - 3.247: 98.5055% ( 2) 00:10:17.354 3.247 - 3.271: 98.5129% ( 1) 00:10:17.354 3.366 - 3.390: 98.5203% ( 1) 00:10:17.354 3.556 - 3.579: 98.5278% ( 1) 00:10:17.354 3.603 - 3.627: 98.5352% ( 1) 00:10:17.354 3.627 - 3.650: 98.5501% ( 2) 00:10:17.354 3.650 - 3.674: 98.5575% ( 1) 00:10:17.354 [2024-07-15 13:02:38.635697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.354 3.674 - 3.698: 98.5724% ( 2) 00:10:17.354 3.698 - 3.721: 98.5873% ( 2) 00:10:17.354 3.721 - 3.745: 98.6170% ( 4) 00:10:17.354 3.745 - 3.769: 98.6244% ( 1) 00:10:17.354 3.769 - 3.793: 98.6319% ( 1) 00:10:17.354 3.793 - 3.816: 98.6542% ( 3) 00:10:17.354 3.840 - 3.864: 98.6616% ( 1) 00:10:17.354 3.864 - 3.887: 98.6765% ( 2) 00:10:17.354 3.911 - 3.935: 98.6914% ( 2) 00:10:17.354 3.982 - 4.006: 98.7062% ( 2) 00:10:17.354 4.030 - 4.053: 98.7137% ( 1) 00:10:17.354 4.053 - 4.077: 98.7211% ( 1) 00:10:17.354 4.219 - 4.243: 98.7285% ( 1) 00:10:17.354 4.812 - 4.836: 98.7434% ( 2) 00:10:17.354 5.239 - 5.262: 98.7508% ( 1) 00:10:17.354 5.333 - 5.357: 98.7583% ( 1) 00:10:17.354 5.381 - 5.404: 98.7657% ( 1) 00:10:17.354 5.736 - 5.760: 98.7731% ( 1) 00:10:17.354 6.068 - 6.116: 98.7806% ( 1) 00:10:17.354 6.116 - 6.163: 98.7880% ( 1) 00:10:17.354 6.163 - 6.210: 98.7954% ( 1) 00:10:17.354 6.210 - 6.258: 98.8029% ( 1) 00:10:17.354 6.684 - 6.732: 98.8103% ( 1) 00:10:17.354 6.732 - 6.779: 98.8178% ( 1) 00:10:17.354 6.874 - 6.921: 98.8252% ( 1) 00:10:17.354 7.064 - 7.111: 98.8326% ( 1) 00:10:17.354 8.107 - 8.154: 98.8401% ( 1) 00:10:17.354 9.150 - 9.197: 98.8475% ( 1) 00:10:17.354 15.455 - 15.550: 98.8549% ( 1) 00:10:17.354 15.550 - 15.644: 98.8624% ( 1) 00:10:17.354 15.644 - 15.739: 98.8772% ( 2) 00:10:17.354 15.739 - 15.834: 98.8847% ( 1) 00:10:17.354 15.834 - 15.929: 98.8995% ( 2) 00:10:17.354 15.929 - 16.024: 98.9293% ( 4) 00:10:17.354 16.024 - 16.119: 98.9516% ( 3) 00:10:17.354 16.119 - 16.213: 98.9739% ( 3) 00:10:17.354 16.213 - 16.308: 99.0557% ( 11) 00:10:17.354 16.403 - 16.498: 99.0780% ( 3) 00:10:17.354 16.498 - 16.593: 99.1077% ( 4) 00:10:17.354 16.593 - 16.687: 99.1672% ( 8) 00:10:17.354 16.687 - 16.782: 99.1895% ( 3) 00:10:17.354 16.782 - 16.877: 99.2341% ( 6) 00:10:17.354 16.877 - 16.972: 99.2565% ( 3) 00:10:17.354 16.972 - 17.067: 99.2713% ( 2) 00:10:17.354 17.067 - 17.161: 99.2936% ( 3) 00:10:17.354 17.161 - 17.256: 99.3159% ( 3) 00:10:17.354 17.256 - 17.351: 99.3457% ( 4) 00:10:17.354 17.351 - 17.446: 99.3531% ( 1) 00:10:17.354 17.825 - 17.920: 99.3605% ( 1) 00:10:17.354 17.920 - 18.015: 99.3680% ( 1) 00:10:17.354 18.015 - 18.110: 99.3754% ( 1) 00:10:17.354 18.110 - 18.204: 99.3829% ( 1) 00:10:17.354 18.394 - 18.489: 99.3903% ( 1) 00:10:17.354 18.773 - 18.868: 99.3977% ( 1) 00:10:17.354 3980.705 - 4004.978: 99.8885% ( 66) 00:10:17.354 4004.978 - 4029.250: 100.0000% ( 15) 00:10:17.354 00:10:17.354 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:17.354 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:17.354 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:17.354 13:02:38
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:17.354 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:17.354 [ 00:10:17.354 { 00:10:17.354 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:17.354 "subtype": "Discovery", 00:10:17.354 "listen_addresses": [], 00:10:17.354 "allow_any_host": true, 00:10:17.354 "hosts": [] 00:10:17.354 }, 00:10:17.354 { 00:10:17.354 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:17.354 "subtype": "NVMe", 00:10:17.354 "listen_addresses": [ 00:10:17.354 { 00:10:17.354 "trtype": "VFIOUSER", 00:10:17.354 "adrfam": "IPv4", 00:10:17.354 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:17.355 "trsvcid": "0" 00:10:17.355 } 00:10:17.355 ], 00:10:17.355 "allow_any_host": true, 00:10:17.355 "hosts": [], 00:10:17.355 "serial_number": "SPDK1", 00:10:17.355 "model_number": "SPDK bdev Controller", 00:10:17.355 "max_namespaces": 32, 00:10:17.355 "min_cntlid": 1, 00:10:17.355 "max_cntlid": 65519, 00:10:17.355 "namespaces": [ 00:10:17.355 { 00:10:17.355 "nsid": 1, 00:10:17.355 "bdev_name": "Malloc1", 00:10:17.355 "name": "Malloc1", 00:10:17.355 "nguid": "1A13905382FB4E13B6C43AF2FC947B56", 00:10:17.355 "uuid": "1a139053-82fb-4e13-b6c4-3af2fc947b56" 00:10:17.355 }, 00:10:17.355 { 00:10:17.355 "nsid": 2, 00:10:17.355 "bdev_name": "Malloc3", 00:10:17.355 "name": "Malloc3", 00:10:17.355 "nguid": "F7558F283BFB4FE2B2467519162D5EA4", 00:10:17.355 "uuid": "f7558f28-3bfb-4fe2-b246-7519162d5ea4" 00:10:17.355 } 00:10:17.355 ] 00:10:17.355 }, 00:10:17.355 { 00:10:17.355 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:17.355 "subtype": "NVMe", 00:10:17.355 "listen_addresses": [ 00:10:17.355 { 00:10:17.355 "trtype": "VFIOUSER", 00:10:17.355 "adrfam": "IPv4", 00:10:17.355 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:17.355 "trsvcid": "0" 00:10:17.355 } 00:10:17.355 ], 00:10:17.355 "allow_any_host": true, 00:10:17.355 "hosts": [], 00:10:17.355 "serial_number": "SPDK2", 00:10:17.355 "model_number": "SPDK bdev Controller", 00:10:17.355 "max_namespaces": 32, 00:10:17.355 "min_cntlid": 1, 00:10:17.355 "max_cntlid": 65519, 00:10:17.355 "namespaces": [ 00:10:17.355 { 00:10:17.355 "nsid": 1, 00:10:17.355 "bdev_name": "Malloc2", 00:10:17.355 "name": "Malloc2", 00:10:17.355 "nguid": "DA581A9BBCCC42D1814528059587688B", 00:10:17.355 "uuid": "da581a9b-bccc-42d1-8145-28059587688b" 00:10:17.355 } 00:10:17.355 ] 00:10:17.355 } 00:10:17.355 ] 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3778964 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:17.355 13:02:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:17.355 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.613 [2024-07-15 13:02:39.110390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.613 Malloc4 00:10:17.613 13:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:17.871 [2024-07-15 13:02:39.455941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.871 13:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:17.871 Asynchronous Event Request test 00:10:17.871 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.871 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.871 Registering asynchronous event callbacks... 00:10:17.871 Starting namespace attribute notice tests for all controllers... 00:10:17.871 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:17.871 aer_cb - Changed Namespace 00:10:17.871 Cleaning up... 00:10:18.129 [ 00:10:18.129 { 00:10:18.129 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:18.129 "subtype": "Discovery", 00:10:18.129 "listen_addresses": [], 00:10:18.129 "allow_any_host": true, 00:10:18.129 "hosts": [] 00:10:18.129 }, 00:10:18.129 { 00:10:18.129 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:18.129 "subtype": "NVMe", 00:10:18.129 "listen_addresses": [ 00:10:18.129 { 00:10:18.129 "trtype": "VFIOUSER", 00:10:18.129 "adrfam": "IPv4", 00:10:18.129 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:18.129 "trsvcid": "0" 00:10:18.129 } 00:10:18.129 ], 00:10:18.129 "allow_any_host": true, 00:10:18.129 "hosts": [], 00:10:18.129 "serial_number": "SPDK1", 00:10:18.129 "model_number": "SPDK bdev Controller", 00:10:18.129 "max_namespaces": 32, 00:10:18.129 "min_cntlid": 1, 00:10:18.129 "max_cntlid": 65519, 00:10:18.130 "namespaces": [ 00:10:18.130 { 00:10:18.130 "nsid": 1, 00:10:18.130 "bdev_name": "Malloc1", 00:10:18.130 "name": "Malloc1", 00:10:18.130 "nguid": "1A13905382FB4E13B6C43AF2FC947B56", 00:10:18.130 "uuid": "1a139053-82fb-4e13-b6c4-3af2fc947b56" 00:10:18.130 }, 00:10:18.130 { 00:10:18.130 "nsid": 2, 00:10:18.130 "bdev_name": "Malloc3", 00:10:18.130 "name": "Malloc3", 00:10:18.130 "nguid": "F7558F283BFB4FE2B2467519162D5EA4", 00:10:18.130 "uuid": "f7558f28-3bfb-4fe2-b246-7519162d5ea4" 00:10:18.130 } 00:10:18.130 ] 00:10:18.130 }, 00:10:18.130 { 00:10:18.130 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:18.130 "subtype": "NVMe", 00:10:18.130 "listen_addresses": [ 00:10:18.130 { 00:10:18.130 "trtype": "VFIOUSER", 00:10:18.130 "adrfam": "IPv4", 00:10:18.130 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:18.130 "trsvcid": "0" 00:10:18.130 } 00:10:18.130 ], 00:10:18.130 "allow_any_host": true, 00:10:18.130 "hosts": [], 00:10:18.130 "serial_number": "SPDK2", 00:10:18.130 "model_number": "SPDK bdev Controller", 00:10:18.130 
"max_namespaces": 32, 00:10:18.130 "min_cntlid": 1, 00:10:18.130 "max_cntlid": 65519, 00:10:18.130 "namespaces": [ 00:10:18.130 { 00:10:18.130 "nsid": 1, 00:10:18.130 "bdev_name": "Malloc2", 00:10:18.130 "name": "Malloc2", 00:10:18.130 "nguid": "DA581A9BBCCC42D1814528059587688B", 00:10:18.130 "uuid": "da581a9b-bccc-42d1-8145-28059587688b" 00:10:18.130 }, 00:10:18.130 { 00:10:18.130 "nsid": 2, 00:10:18.130 "bdev_name": "Malloc4", 00:10:18.130 "name": "Malloc4", 00:10:18.130 "nguid": "E83FDC5A49B743BE9C7219C065DFCCCA", 00:10:18.130 "uuid": "e83fdc5a-49b7-43be-9c72-19c065dfccca" 00:10:18.130 } 00:10:18.130 ] 00:10:18.130 } 00:10:18.130 ] 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3778964 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3773228 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3773228 ']' 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3773228 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3773228 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3773228' 00:10:18.130 killing process with pid 3773228 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3773228 00:10:18.130 13:02:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3773228 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3779108 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3779108' 00:10:18.700 Process pid: 3779108 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:18.700 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3779108 00:10:18.701 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3779108 ']' 00:10:18.701 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.701 13:02:40 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.701 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.701 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.701 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:18.701 [2024-07-15 13:02:40.208052] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:18.701 [2024-07-15 13:02:40.209087] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:10:18.701 [2024-07-15 13:02:40.209144] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.701 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.701 [2024-07-15 13:02:40.266786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.701 [2024-07-15 13:02:40.373225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.701 [2024-07-15 13:02:40.373276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.701 [2024-07-15 13:02:40.373304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.701 [2024-07-15 13:02:40.373314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.701 [2024-07-15 13:02:40.373324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.701 [2024-07-15 13:02:40.373406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.701 [2024-07-15 13:02:40.373436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.701 [2024-07-15 13:02:40.373494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.701 [2024-07-15 13:02:40.373496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.959 [2024-07-15 13:02:40.477192] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:18.959 [2024-07-15 13:02:40.477399] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:18.959 [2024-07-15 13:02:40.477688] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:18.959 [2024-07-15 13:02:40.478342] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:18.959 [2024-07-15 13:02:40.478570] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:10:18.959 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.959 13:02:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:18.959 13:02:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:19.898 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:20.156 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:20.156 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:20.156 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:20.156 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:20.156 13:02:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:20.415 Malloc1 00:10:20.415 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:20.674 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:20.932 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:21.188 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:21.188 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:21.188 13:02:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:21.445 Malloc2 00:10:21.445 13:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:21.707 13:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:21.966 13:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3779108 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3779108 ']' 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3779108 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.225 13:02:43 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779108 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779108' 00:10:22.225 killing process with pid 3779108 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3779108 00:10:22.225 13:02:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3779108 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:22.817 00:10:22.817 real 0m53.922s 00:10:22.817 user 3m32.861s 00:10:22.817 sys 0m4.386s 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:22.817 ************************************ 00:10:22.817 END TEST nvmf_vfio_user 00:10:22.817 ************************************ 00:10:22.817 13:02:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.817 13:02:44 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:22.817 13:02:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.817 13:02:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.817 13:02:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.817 ************************************ 00:10:22.817 START TEST nvmf_vfio_user_nvme_compliance 00:10:22.817 ************************************ 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:22.817 * Looking for test storage... 
00:10:22.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3779705 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3779705' 00:10:22.817 Process pid: 3779705 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3779705 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3779705 ']' 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.817 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:22.817 [2024-07-15 13:02:44.378072] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:10:22.817 [2024-07-15 13:02:44.378169] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.817 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.817 [2024-07-15 13:02:44.435317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.081 [2024-07-15 13:02:44.543130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.081 [2024-07-15 13:02:44.543202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.081 [2024-07-15 13:02:44.543230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.081 [2024-07-15 13:02:44.543240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.081 [2024-07-15 13:02:44.543250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
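For reference, the vfio-user bring-up traced below (compliance.sh@28-38) reduces to the following RPC sequence once the rpc_cmd wrapper is peeled away; a minimal sketch, assuming rpc.py from the SPDK scripts directory is on PATH and the target listens on the default /var/tmp/spdk.sock:

# Transport, backing bdev, subsystem, namespace, listener -- the same
# order and values the trace records for the compliance target.
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0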
00:10:23.081 [2024-07-15 13:02:44.543328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.081 [2024-07-15 13:02:44.543358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.081 [2024-07-15 13:02:44.543361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.081 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.081 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:23.081 13:02:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.017 malloc0 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.017 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:24.276 13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.276 
13:02:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.276 00:10:24.276 00:10:24.276 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.276 http://cunit.sourceforge.net/ 00:10:24.276 00:10:24.276 00:10:24.276 Suite: nvme_compliance 00:10:24.276 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 13:02:45.884438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.276 [2024-07-15 13:02:45.885851] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:24.276 [2024-07-15 13:02:45.885896] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:24.276 [2024-07-15 13:02:45.885909] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:24.276 [2024-07-15 13:02:45.887461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.276 passed 00:10:24.276 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 13:02:45.972075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.276 [2024-07-15 13:02:45.975096] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.534 passed 00:10:24.534 Test: admin_identify_ns ...[2024-07-15 13:02:46.063457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.534 [2024-07-15 13:02:46.122909] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:24.534 [2024-07-15 13:02:46.130894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:24.535 [2024-07-15 13:02:46.152019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.535 passed 00:10:24.793 Test: admin_get_features_mandatory_features ...[2024-07-15 13:02:46.235675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.793 [2024-07-15 13:02:46.238714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.793 passed 00:10:24.793 Test: admin_get_features_optional_features ...[2024-07-15 13:02:46.323287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.793 [2024-07-15 13:02:46.326311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.793 passed 00:10:24.793 Test: admin_set_features_number_of_queues ...[2024-07-15 13:02:46.408475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.052 [2024-07-15 13:02:46.513984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.052 passed 00:10:25.052 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 13:02:46.596688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.052 [2024-07-15 13:02:46.599715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.052 passed 00:10:25.052 Test: admin_get_log_page_with_lpo ...[2024-07-15 13:02:46.683895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.052 [2024-07-15 13:02:46.749893] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:25.310 [2024-07-15 13:02:46.762980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.310 passed 00:10:25.310 Test: fabric_property_get ...[2024-07-15 13:02:46.849222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.310 [2024-07-15 13:02:46.850505] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:25.310 [2024-07-15 13:02:46.852243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.310 passed 00:10:25.310 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 13:02:46.933765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.310 [2024-07-15 13:02:46.935104] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:25.310 [2024-07-15 13:02:46.936787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.310 passed 00:10:25.567 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 13:02:47.021572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.567 [2024-07-15 13:02:47.105902] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:25.567 [2024-07-15 13:02:47.121889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:25.567 [2024-07-15 13:02:47.126985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.567 passed 00:10:25.567 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 13:02:47.210616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.567 [2024-07-15 13:02:47.211914] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:25.567 [2024-07-15 13:02:47.213642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.567 passed 00:10:25.824 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 13:02:47.294801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.825 [2024-07-15 13:02:47.371890] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:25.825 [2024-07-15 13:02:47.395891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:25.825 [2024-07-15 13:02:47.401000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.825 passed 00:10:25.825 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 13:02:47.483064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.825 [2024-07-15 13:02:47.484387] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:25.825 [2024-07-15 13:02:47.484426] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:25.825 [2024-07-15 13:02:47.486091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.825 passed 00:10:26.083 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 13:02:47.569366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.083 [2024-07-15 13:02:47.671891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:26.083 [2024-07-15 13:02:47.679891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:26.083 [2024-07-15 13:02:47.687890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:26.083 [2024-07-15 13:02:47.695888] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:26.083 [2024-07-15 13:02:47.724988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.083 passed 00:10:26.343 Test: admin_create_io_sq_verify_pc ...[2024-07-15 13:02:47.805586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.343 [2024-07-15 13:02:47.824902] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:26.343 [2024-07-15 13:02:47.842941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.343 passed 00:10:26.343 Test: admin_create_io_qp_max_qps ...[2024-07-15 13:02:47.925515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:27.720 [2024-07-15 13:02:49.030907] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:27.720 [2024-07-15 13:02:49.419246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:27.977 passed 00:10:27.977 Test: admin_create_io_sq_shared_cq ...[2024-07-15 13:02:49.504428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:27.977 [2024-07-15 13:02:49.635887] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:27.977 [2024-07-15 13:02:49.672996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.234 passed 00:10:28.234 00:10:28.234 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.234 suites 1 1 n/a 0 0 00:10:28.234 tests 18 18 18 0 0 00:10:28.234 asserts 360 360 360 0 n/a 00:10:28.234 00:10:28.234 Elapsed time = 1.572 seconds 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3779705 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3779705 ']' 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3779705 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779705 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779705' 00:10:28.234 killing process with pid 3779705 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3779705 00:10:28.234 13:02:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3779705 00:10:28.491 13:02:50 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:28.491 13:02:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:28.491 00:10:28.491 real 0m5.771s 00:10:28.491 user 0m16.168s 00:10:28.491 sys 0m0.528s 00:10:28.491 13:02:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.491 13:02:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.491 ************************************ 00:10:28.492 END TEST nvmf_vfio_user_nvme_compliance 00:10:28.492 ************************************ 00:10:28.492 13:02:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:28.492 13:02:50 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:28.492 13:02:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.492 13:02:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.492 13:02:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.492 ************************************ 00:10:28.492 START TEST nvmf_vfio_user_fuzz 00:10:28.492 ************************************ 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:28.492 * Looking for test storage... 00:10:28.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
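The NVME_HOSTNQN/NVME_HOSTID pair captured just above is generated once per run by nvme-cli; a minimal sketch of that derivation (the parameter expansion is an assumption about how common.sh slices the NQN, not a quote from it):

# nvme gen-hostnqn prints nqn.2014-08.org.nvmexpress:uuid:<uuid>;
# the host ID the tests reuse is the trailing UUID of that NQN.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")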
00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.492 13:02:50 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3780430 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3780430' 00:10:28.492 Process pid: 3780430 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3780430 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3780430 ']' 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
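The waitforlisten above blocks until the freshly forked nvmf_tgt answers on its RPC socket; a minimal sketch of the same launch-and-wait pattern (hypothetical $SPDK_DIR stands in for the workspace path, and the polling loop approximates the real helper in autotest_common.sh, which also caps retries at 100 and uses its killprocess wrapper in the trap):

# Start the target in the background, tie its lifetime to the shell,
# then poll the UNIX-domain RPC socket until the app responds.
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done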
00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.492 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.060 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.060 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:29.060 13:02:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.997 malloc0 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:29.997 13:02:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:02.061 Fuzzing completed. 
Shutting down the fuzz application 00:11:02.061 00:11:02.061 Dumping successful admin opcodes: 00:11:02.061 8, 9, 10, 24, 00:11:02.061 Dumping successful io opcodes: 00:11:02.061 0, 00:11:02.061 NS: 0x200003a1ef00 I/O qp, Total commands completed: 575075, total successful commands: 2215, random_seed: 3680636160 00:11:02.061 NS: 0x200003a1ef00 admin qp, Total commands completed: 74486, total successful commands: 582, random_seed: 3365016256 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3780430 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3780430 ']' 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3780430 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.061 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3780430 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3780430' 00:11:02.062 killing process with pid 3780430 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3780430 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3780430 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:02.062 00:11:02.062 real 0m32.350s 00:11:02.062 user 0m31.395s 00:11:02.062 sys 0m28.916s 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.062 13:03:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:02.062 ************************************ 00:11:02.062 END TEST nvmf_vfio_user_fuzz 00:11:02.062 ************************************ 00:11:02.062 13:03:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:02.062 13:03:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.062 13:03:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.062 13:03:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.062 13:03:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.062 ************************************ 00:11:02.062 
START TEST nvmf_host_management 00:11:02.062 ************************************ 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.062 * Looking for test storage... 00:11:02.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.062 13:03:22 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.062 13:03:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.062 13:03:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:03.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.002 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:03.003 00:11:03.003 --- 10.0.0.2 ping statistics --- 00:11:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.003 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:03.003 00:11:03.003 --- 10.0.0.1 ping statistics --- 00:11:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.003 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3786497 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3786497 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3786497 ']' 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:03.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.003 13:03:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.003 [2024-07-15 13:03:24.613678] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:03.003 [2024-07-15 13:03:24.613776] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.003 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.003 [2024-07-15 13:03:24.682965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.262 [2024-07-15 13:03:24.805712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.262 [2024-07-15 13:03:24.805783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.262 [2024-07-15 13:03:24.805799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.262 [2024-07-15 13:03:24.805812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.262 [2024-07-15 13:03:24.805824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.262 [2024-07-15 13:03:24.805927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.262 [2024-07-15 13:03:24.805983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.262 [2024-07-15 13:03:24.806035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:03.262 [2024-07-15 13:03:24.806038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.194 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.195 [2024-07-15 13:03:25.591850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.195 13:03:25 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.195 Malloc0 00:11:04.195 [2024-07-15 13:03:25.650883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3786671 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3786671 /var/tmp/bdevperf.sock 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3786671 ']' 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:04.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
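Note on the subsystem setup just traced: the rpcs.txt batch assembled at host_management.sh@23 and replayed through rpc_cmd at @30 is not itself echoed under xtrace. Judging by the Malloc0 bdev and the 10.0.0.2:4420 listener notices around it, it is presumably along these lines — the method names are standard SPDK rpc.py calls, while the malloc geometry, serial number and exact flags below are illustrative guesses, not a verbatim reconstruction:

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The nvmf_subsystem_remove_host/nvmf_subsystem_add_host pair exercised at @84/@85 further down only has teeth because the subsystem gates on its host allow list: dropping nqn.2016-06.io.spdk:host0 from the list is what forces the controller off the subsystem mid-I/O, which is exactly the abort storm visible below.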
00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:04.195 { 00:11:04.195 "params": { 00:11:04.195 "name": "Nvme$subsystem", 00:11:04.195 "trtype": "$TEST_TRANSPORT", 00:11:04.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.195 "adrfam": "ipv4", 00:11:04.195 "trsvcid": "$NVMF_PORT", 00:11:04.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.195 "hdgst": ${hdgst:-false}, 00:11:04.195 "ddgst": ${ddgst:-false} 00:11:04.195 }, 00:11:04.195 "method": "bdev_nvme_attach_controller" 00:11:04.195 } 00:11:04.195 EOF 00:11:04.195 )") 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:04.195 13:03:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:04.195 "params": { 00:11:04.195 "name": "Nvme0", 00:11:04.195 "trtype": "tcp", 00:11:04.195 "traddr": "10.0.0.2", 00:11:04.195 "adrfam": "ipv4", 00:11:04.195 "trsvcid": "4420", 00:11:04.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:04.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:04.195 "hdgst": false, 00:11:04.195 "ddgst": false 00:11:04.195 }, 00:11:04.195 "method": "bdev_nvme_attach_controller" 00:11:04.195 }' 00:11:04.195 [2024-07-15 13:03:25.729064] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:04.195 [2024-07-15 13:03:25.729146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786671 ] 00:11:04.195 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.195 [2024-07-15 13:03:25.789587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.452 [2024-07-15 13:03:25.900644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.709 Running I/O for 10 seconds... 
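Before the host entry is pulled, the script gates on I/O actually flowing: waitforio (host_management.sh@45-64, traced next) polls the bdevperf RPC socket until the job reports at least 100 completed reads on Nvme0n1. Reduced to a stand-alone sketch — the loop bound, RPC call and jq filter are taken from the trace, while the retry delay is an assumption, since the first probe already succeeds here at 579 reads:

    # Poll bdev_get_iostat over the bdevperf RPC socket until >=100 reads complete.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i
        [ -z "$rpc_sock" ] && return 1   # host_management.sh@45
        [ -z "$bdev" ] && return 1       # host_management.sh@49
        for ((i = 10; i != 0; i--)); do
            local read_io_count
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # assumed; the delay between probes is not visible in this trace
        done
        return $ret
    }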
00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.277 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:05.277 [2024-07-15 13:03:26.734992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.277 [2024-07-15 13:03:26.735733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.277 [2024-07-15 13:03:26.735748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.735972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.735989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:05.278 [2024-07-15 13:03:26.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 
13:03:26.736503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.736966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.736984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.278 [2024-07-15 13:03:26.737188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:05.278 [2024-07-15 13:03:26.737220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.278 [2024-07-15 13:03:26.737237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:05.279 [2024-07-15 13:03:26.737252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:05.279 [2024-07-15 13:03:26.737302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235f900 is same with the state(5) to be set
00:11:05.279 [2024-07-15 13:03:26.737390] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235f900 was disconnected and freed. reset controller.
00:11:05.279 [2024-07-15 13:03:26.737473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:05.279 [2024-07-15 13:03:26.737498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:11:05.279 [2024-07-15 13:03:26.737531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:11:05.279 [2024-07-15 13:03:26.737565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:11:05.279 [2024-07-15 13:03:26.737594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:05.279 [2024-07-15 13:03:26.737609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4e790 is same with the state(5) to be set
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:05.279 [2024-07-15 13:03:26.738812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:05.279 task offset: 88832 on job bdev=Nvme0n1 fails
00:11:05.279
00:11:05.279 Latency(us)
00:11:05.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:05.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:05.279 Job: Nvme0n1 ended in about 0.45 seconds with error
00:11:05.279 Verification LBA range: start 0x0 length 0x400
00:11:05.279 Nvme0n1 : 0.45 1421.28 88.83 142.13 0.00 39877.82 3179.71 34369.99
00:11:05.279 ===================================================================================================================
00:11:05.279 Total : 1421.28 88.83 142.13 0.00 39877.82 3179.71 34369.99
00:11:05.279 [2024-07-15 13:03:26.740693] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:05.279 [2024-07-15 13:03:26.740722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4e790 (9): Bad file descriptor
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:05.279 13:03:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 13:03:26.747204] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3786671
00:11:06.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3786671) - No such process
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:06.211 {
00:11:06.211 "params": {
00:11:06.211 "name": "Nvme$subsystem",
00:11:06.211 "trtype": "$TEST_TRANSPORT",
00:11:06.211 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:06.211 "adrfam": "ipv4",
00:11:06.211 "trsvcid": "$NVMF_PORT",
00:11:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:06.211 "hdgst": ${hdgst:-false},
00:11:06.211 "ddgst": ${ddgst:-false}
00:11:06.211 },
00:11:06.211 "method": "bdev_nvme_attach_controller"
00:11:06.211 }
00:11:06.211 EOF
00:11:06.211 )")
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
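The config assembly being traced here (per-subsystem heredoc at nvmf/common.sh@554, jq validation at @556, then the IFS=, join and printf at @557-558 that follow) is gen_nvmf_target_json, the same helper used for the 10-second run at 13:03:25. As a stand-alone sketch of the pattern — the real helper parametrizes transport and address through $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT as the heredoc above shows, and the outer "subsystems" wrapper below is an assumption made to keep the example self-contained:

    gen_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one bdev_nvme_attach_controller entry per requested subsystem
            config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
                "$subsystem" "$subsystem" "$subsystem")")
        done
        local IFS=,
        # join the entries and emit a config document bdevperf can take via --json
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
    }

bdevperf then consumes the result through process substitution, which is what the /dev/fd/62 and /dev/fd/63 descriptors seen at @100 and @72 correspond to.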
00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:06.211 13:03:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:06.211 "params": { 00:11:06.211 "name": "Nvme0", 00:11:06.211 "trtype": "tcp", 00:11:06.211 "traddr": "10.0.0.2", 00:11:06.211 "adrfam": "ipv4", 00:11:06.211 "trsvcid": "4420", 00:11:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:06.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:06.211 "hdgst": false, 00:11:06.211 "ddgst": false 00:11:06.211 }, 00:11:06.211 "method": "bdev_nvme_attach_controller" 00:11:06.211 }' 00:11:06.211 [2024-07-15 13:03:27.794370] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:06.211 [2024-07-15 13:03:27.794458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786942 ] 00:11:06.211 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.211 [2024-07-15 13:03:27.855476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.468 [2024-07-15 13:03:27.966485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.735 Running I/O for 1 seconds... 00:11:07.682 00:11:07.682 Latency(us) 00:11:07.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:07.682 Verification LBA range: start 0x0 length 0x400 00:11:07.682 Nvme0n1 : 1.03 1547.68 96.73 0.00 0.00 40708.26 9126.49 33593.27 00:11:07.682 =================================================================================================================== 00:11:07.682 Total : 1547.68 96.73 0.00 0.00 40708.26 9126.49 33593.27 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.939 rmmod nvme_tcp 00:11:07.939 rmmod nvme_fabrics 00:11:07.939 rmmod nvme_keyring 00:11:07.939 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 3786497 ']' 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3786497 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3786497 ']' 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3786497 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3786497 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3786497' 00:11:07.940 killing process with pid 3786497 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3786497 00:11:07.940 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3786497 00:11:08.197 [2024-07-15 13:03:29.847940] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.197 13:03:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.725 13:03:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.725 13:03:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:10.725 00:11:10.725 real 0m9.432s 00:11:10.725 user 0m23.468s 00:11:10.725 sys 0m2.658s 00:11:10.725 13:03:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.725 13:03:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.725 ************************************ 00:11:10.725 END TEST nvmf_host_management 00:11:10.725 ************************************ 00:11:10.725 13:03:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.725 13:03:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:10.725 13:03:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.725 13:03:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.725 13:03:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.725 ************************************ 00:11:10.725 START TEST nvmf_lvol 00:11:10.725 
************************************ 00:11:10.725 13:03:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:10.725 * Looking for test storage... 00:11:10.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.725 13:03:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.636 13:03:33 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.636 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:12.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:12.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:12.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:12.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.637 13:03:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:12.637 00:11:12.637 --- 10.0.0.2 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:11:12.637 00:11:12.637 --- 10.0.0.1 ping statistics --- 00:11:12.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.637 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3789024 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3789024 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3789024 ']' 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.637 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.637 [2024-07-15 13:03:34.084259] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:12.637 [2024-07-15 13:03:34.084333] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.637 [2024-07-15 13:03:34.147272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:12.637 [2024-07-15 13:03:34.257265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.637 [2024-07-15 13:03:34.257323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:12.637 [2024-07-15 13:03:34.257339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.637 [2024-07-15 13:03:34.257352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.637 [2024-07-15 13:03:34.257365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.637 [2024-07-15 13:03:34.257451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.637 [2024-07-15 13:03:34.257521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.637 [2024-07-15 13:03:34.257523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.895 13:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.152 [2024-07-15 13:03:34.676282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.152 13:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.411 13:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:13.411 13:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.671 13:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:13.671 13:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:13.928 13:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:14.186 13:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=85be2d30-0e58-427d-93a4-0c17f2dd9bd1 00:11:14.186 13:03:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85be2d30-0e58-427d-93a4-0c17f2dd9bd1 lvol 20 00:11:14.443 13:03:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=686a6fe7-09c7-47a6-9a2d-830f51f12385 00:11:14.443 13:03:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:14.700 13:03:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 686a6fe7-09c7-47a6-9a2d-830f51f12385 00:11:14.957 13:03:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
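# --- Annotation: a sketch (not captured output) of the point-to-point TCP rig that
# --- nvmf_tcp_init assembled in the trace above. The target-side port cvl_0_0 is moved
# --- into a network namespace and the target app is launched via NVMF_TARGET_NS_CMD
# --- ("ip netns exec cvl_0_0_ns_spdk"), so the initiator on cvl_0_1 at 10.0.0.1 reaches
# --- the target at 10.0.0.2:4420 over a real TCP path. Commands are the ones logged.
ip netns add cvl_0_0_ns_spdk                                  # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (ns side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in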
00:11:15.215 [2024-07-15 13:03:36.752507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.215 13:03:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.472 13:03:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3789449 00:11:15.472 13:03:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:15.472 13:03:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:15.472 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.410 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 686a6fe7-09c7-47a6-9a2d-830f51f12385 MY_SNAPSHOT 00:11:16.667 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1bfe67d0-90be-46d1-af9d-759601dac4cd 00:11:16.667 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 686a6fe7-09c7-47a6-9a2d-830f51f12385 30 00:11:16.924 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1bfe67d0-90be-46d1-af9d-759601dac4cd MY_CLONE 00:11:17.490 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df492a89-3ced-4a03-a83a-82b4a73f629d 00:11:17.490 13:03:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate df492a89-3ced-4a03-a83a-82b4a73f629d 00:11:18.054 13:03:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3789449 00:11:26.160 Initializing NVMe Controllers 00:11:26.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:26.160 Controller IO queue size 128, less than required. 00:11:26.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:26.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:26.160 Initialization complete. Launching workers. 
00:11:26.160 ======================================================== 00:11:26.160 Latency(us) 00:11:26.160 Device Information : IOPS MiB/s Average min max 00:11:26.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10122.10 39.54 12649.57 451.44 121312.03 00:11:26.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10552.20 41.22 12133.29 2124.22 49694.74 00:11:26.160 ======================================================== 00:11:26.160 Total : 20674.30 80.76 12386.06 451.44 121312.03 00:11:26.160 00:11:26.160 13:03:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:26.160 13:03:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 686a6fe7-09c7-47a6-9a2d-830f51f12385 00:11:26.418 13:03:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85be2d30-0e58-427d-93a4-0c17f2dd9bd1 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.676 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.676 rmmod nvme_tcp 00:11:26.934 rmmod nvme_fabrics 00:11:26.934 rmmod nvme_keyring 00:11:26.934 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.934 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:26.934 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:26.934 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3789024 ']' 00:11:26.934 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3789024 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3789024 ']' 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3789024 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789024 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789024' 00:11:26.935 killing process with pid 3789024 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3789024 00:11:26.935 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3789024 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.193 
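# --- Annotation: a recap sketch (not captured output) of the logical-volume sequence the
# --- nvmf_lvol test just exercised, reduced to the underlying RPC calls. "rpc.py" stands
# --- in for the full scripts/rpc.py path shown in the trace; the UUIDs are the ones the
# --- test reported above (lvstore -> lvol -> snapshot -> resize -> clone -> inflate).
rpc.py bdev_lvol_create_lvstore raid0 lvs                               # lvstore on the RAID0 bdev
rpc.py bdev_lvol_create -u 85be2d30-0e58-427d-93a4-0c17f2dd9bd1 lvol 20 # 20 MiB lvol
rpc.py bdev_lvol_snapshot 686a6fe7-09c7-47a6-9a2d-830f51f12385 MY_SNAPSHOT
rpc.py bdev_lvol_resize 686a6fe7-09c7-47a6-9a2d-830f51f12385 30         # grow the lvol under I/O
rpc.py bdev_lvol_clone 1bfe67d0-90be-46d1-af9d-759601dac4cd MY_CLONE    # clone from the snapshot
rpc.py bdev_lvol_inflate df492a89-3ced-4a03-a83a-82b4a73f629d           # decouple clone from snapshot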
13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.193 13:03:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.118 13:03:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.374 00:11:29.374 real 0m18.851s 00:11:29.374 user 1m4.455s 00:11:29.374 sys 0m5.640s 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 ************************************ 00:11:29.374 END TEST nvmf_lvol 00:11:29.374 ************************************ 00:11:29.374 13:03:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.374 13:03:50 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:29.374 13:03:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.374 13:03:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.374 13:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 ************************************ 00:11:29.374 START TEST nvmf_lvs_grow 00:11:29.374 ************************************ 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:29.374 * Looking for test storage... 
00:11:29.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.374 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.375 13:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.269 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.270 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:11:31.270 00:11:31.270 --- 10.0.0.2 ping statistics --- 00:11:31.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.271 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:31.271 00:11:31.271 --- 10.0.0.1 ping statistics --- 00:11:31.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.271 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3792706 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3792706 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3792706 ']' 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.271 13:03:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.528 [2024-07-15 13:03:53.003288] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:31.528 [2024-07-15 13:03:53.003370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.528 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.528 [2024-07-15 13:03:53.067541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.528 [2024-07-15 13:03:53.174750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.528 [2024-07-15 13:03:53.174811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.528 [2024-07-15 13:03:53.174825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.528 [2024-07-15 13:03:53.174836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.528 [2024-07-15 13:03:53.174846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.528 [2024-07-15 13:03:53.174887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.785 13:03:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.043 [2024-07-15 13:03:53.592924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:32.043 ************************************ 00:11:32.043 START TEST lvs_grow_clean 00:11:32.043 ************************************ 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:32.043 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:32.301 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:32.301 13:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:32.558 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7af21eab-f056-4779-951e-4f9cb9def063 00:11:32.558 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:32.558 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:32.815 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:32.815 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:32.815 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7af21eab-f056-4779-951e-4f9cb9def063 lvol 150 00:11:33.073 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=58ed3e5b-0824-404f-8950-f139719f9fbd 00:11:33.073 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.073 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:33.330 [2024-07-15 13:03:54.936194] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:33.330 [2024-07-15 13:03:54.936272] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:33.330 true 00:11:33.330 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:33.330 13:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:33.588 13:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:33.588 13:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:33.846 13:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58ed3e5b-0824-404f-8950-f139719f9fbd 00:11:34.104 13:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:34.362 [2024-07-15 13:03:55.923224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.362 13:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3793140 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3793140 /var/tmp/bdevperf.sock 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3793140 ']' 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:34.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.621 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:34.621 [2024-07-15 13:03:56.227385] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:11:34.621 [2024-07-15 13:03:56.227453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793140 ] 00:11:34.621 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.621 [2024-07-15 13:03:56.288379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.878 [2024-07-15 13:03:56.406999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.878 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.878 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:34.878 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:35.443 Nvme0n1 00:11:35.443 13:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:35.701 [ 00:11:35.701 { 00:11:35.701 "name": "Nvme0n1", 00:11:35.701 "aliases": [ 00:11:35.701 "58ed3e5b-0824-404f-8950-f139719f9fbd" 00:11:35.701 ], 00:11:35.701 "product_name": "NVMe disk", 00:11:35.701 "block_size": 4096, 00:11:35.701 "num_blocks": 38912, 00:11:35.701 "uuid": "58ed3e5b-0824-404f-8950-f139719f9fbd", 00:11:35.701 "assigned_rate_limits": { 00:11:35.701 "rw_ios_per_sec": 0, 00:11:35.701 "rw_mbytes_per_sec": 0, 00:11:35.701 "r_mbytes_per_sec": 0, 00:11:35.701 "w_mbytes_per_sec": 0 00:11:35.701 }, 00:11:35.701 "claimed": false, 00:11:35.701 "zoned": false, 00:11:35.701 "supported_io_types": { 00:11:35.701 "read": true, 00:11:35.701 "write": true, 00:11:35.701 "unmap": true, 00:11:35.701 "flush": true, 00:11:35.701 "reset": true, 00:11:35.701 "nvme_admin": true, 00:11:35.701 "nvme_io": true, 00:11:35.701 "nvme_io_md": false, 00:11:35.701 "write_zeroes": true, 00:11:35.701 "zcopy": false, 00:11:35.701 "get_zone_info": false, 00:11:35.701 "zone_management": false, 00:11:35.701 "zone_append": false, 00:11:35.701 "compare": true, 00:11:35.701 "compare_and_write": true, 00:11:35.701 "abort": true, 00:11:35.701 "seek_hole": false, 00:11:35.701 "seek_data": false, 00:11:35.701 "copy": true, 00:11:35.701 "nvme_iov_md": false 00:11:35.701 }, 00:11:35.701 "memory_domains": [ 00:11:35.701 { 00:11:35.701 "dma_device_id": "system", 00:11:35.701 "dma_device_type": 1 00:11:35.701 } 00:11:35.701 ], 00:11:35.701 "driver_specific": { 00:11:35.701 "nvme": [ 00:11:35.701 { 00:11:35.701 "trid": { 00:11:35.701 "trtype": "TCP", 00:11:35.701 "adrfam": "IPv4", 00:11:35.701 "traddr": "10.0.0.2", 00:11:35.701 "trsvcid": "4420", 00:11:35.701 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:35.701 }, 00:11:35.701 "ctrlr_data": { 00:11:35.701 "cntlid": 1, 00:11:35.701 "vendor_id": "0x8086", 00:11:35.701 "model_number": "SPDK bdev Controller", 00:11:35.701 "serial_number": "SPDK0", 00:11:35.701 "firmware_revision": "24.09", 00:11:35.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:35.701 "oacs": { 00:11:35.701 "security": 0, 00:11:35.701 "format": 0, 00:11:35.701 "firmware": 0, 00:11:35.701 "ns_manage": 0 00:11:35.701 }, 00:11:35.701 "multi_ctrlr": true, 00:11:35.702 "ana_reporting": false 00:11:35.702 }, 
00:11:35.702 "vs": { 00:11:35.702 "nvme_version": "1.3" 00:11:35.702 }, 00:11:35.702 "ns_data": { 00:11:35.702 "id": 1, 00:11:35.702 "can_share": true 00:11:35.702 } 00:11:35.702 } 00:11:35.702 ], 00:11:35.702 "mp_policy": "active_passive" 00:11:35.702 } 00:11:35.702 } 00:11:35.702 ] 00:11:35.702 13:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3793277 00:11:35.702 13:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:35.702 13:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:35.702 Latency(us) 00:11:35.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.702 Running I/O for 10 seconds... 00:11:36.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.635 Nvme0n1 : 1.00 13957.00 54.52 0.00 0.00 0.00 0.00 0.00 00:11:36.635 =================================================================================================================== 00:11:36.635 Total : 13957.00 54.52 0.00 0.00 0.00 0.00 0.00 00:11:36.635 00:11:37.568 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:37.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.825 Nvme0n1 : 2.00 14130.50 55.20 0.00 0.00 0.00 0.00 0.00 00:11:37.826 =================================================================================================================== 00:11:37.826 Total : 14130.50 55.20 0.00 0.00 0.00 0.00 0.00 00:11:37.826 00:11:37.826 true 00:11:37.826 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:37.826 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:38.084 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:38.084 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:38.084 13:03:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3793277 00:11:38.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.650 Nvme0n1 : 3.00 14253.00 55.68 0.00 0.00 0.00 0.00 0.00 00:11:38.650 =================================================================================================================== 00:11:38.650 Total : 14253.00 55.68 0.00 0.00 0.00 0.00 0.00 00:11:38.650 00:11:40.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.022 Nvme0n1 : 4.00 14376.50 56.16 0.00 0.00 0.00 0.00 0.00 00:11:40.022 =================================================================================================================== 00:11:40.022 Total : 14376.50 56.16 0.00 0.00 0.00 0.00 0.00 00:11:40.022 00:11:40.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.951 Nvme0n1 : 5.00 14437.00 56.39 0.00 0.00 0.00 0.00 0.00 00:11:40.951 =================================================================================================================== 00:11:40.951 
Total : 14437.00 56.39 0.00 0.00 0.00 0.00 0.00 00:11:40.951 00:11:41.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.883 Nvme0n1 : 6.00 14500.17 56.64 0.00 0.00 0.00 0.00 0.00 00:11:41.883 =================================================================================================================== 00:11:41.883 Total : 14500.17 56.64 0.00 0.00 0.00 0.00 0.00 00:11:41.883 00:11:42.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.816 Nvme0n1 : 7.00 14561.57 56.88 0.00 0.00 0.00 0.00 0.00 00:11:42.816 =================================================================================================================== 00:11:42.816 Total : 14561.57 56.88 0.00 0.00 0.00 0.00 0.00 00:11:42.816 00:11:43.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.748 Nvme0n1 : 8.00 14608.50 57.06 0.00 0.00 0.00 0.00 0.00 00:11:43.748 =================================================================================================================== 00:11:43.748 Total : 14608.50 57.06 0.00 0.00 0.00 0.00 0.00 00:11:43.748 00:11:44.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.677 Nvme0n1 : 9.00 14645.67 57.21 0.00 0.00 0.00 0.00 0.00 00:11:44.677 =================================================================================================================== 00:11:44.677 Total : 14645.67 57.21 0.00 0.00 0.00 0.00 0.00 00:11:44.677 00:11:45.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.663 Nvme0n1 : 10.00 14675.00 57.32 0.00 0.00 0.00 0.00 0.00 00:11:45.663 =================================================================================================================== 00:11:45.663 Total : 14675.00 57.32 0.00 0.00 0.00 0.00 0.00 00:11:45.663 00:11:45.663 00:11:45.663 Latency(us) 00:11:45.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.663 Nvme0n1 : 10.00 14681.67 57.35 0.00 0.00 8713.51 5024.43 17282.09 00:11:45.663 =================================================================================================================== 00:11:45.663 Total : 14681.67 57.35 0.00 0.00 8713.51 5024.43 17282.09 00:11:45.663 { 00:11:45.663 "core_count": 1, 00:11:45.663 "test_results": [ 00:11:45.663 { 00:11:45.663 "job": "Nvme0n1", 00:11:45.663 "test_status": "finished", 00:11:45.663 "core_mask": "0x2", 00:11:45.663 "workload": "randwrite", 00:11:45.663 "queue_depth": 128, 00:11:45.663 "io_size": 4096, 00:11:45.663 "runtime": 10.004173278808594, 00:11:45.663 "io_per_second": 14681.67333771617, 00:11:45.663 "MiB_per_second": 57.35028647545379, 00:11:45.663 "fails_per_second": 0.0, 00:11:45.663 "timeout_per_second": 0.0, 00:11:45.663 "average_latency_us": 8713.509569978207, 00:11:45.663 "min_latency_us": 5024.426666666666, 00:11:45.663 "max_latency_us": 17282.085925925927 00:11:45.663 } 00:11:45.663 ] 00:11:45.663 } 00:11:45.663 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3793140 00:11:45.663 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3793140 ']' 00:11:45.663 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3793140 00:11:45.663 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:45.663 13:04:07 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.663 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3793140 00:11:45.920 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:45.920 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:45.920 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3793140' 00:11:45.920 killing process with pid 3793140 00:11:45.920 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3793140 00:11:45.920 Received shutdown signal, test time was about 10.000000 seconds 00:11:45.920 00:11:45.920 Latency(us) 00:11:45.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.920 =================================================================================================================== 00:11:45.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:45.920 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3793140 00:11:46.177 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.435 13:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:46.692 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:46.692 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:46.949 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:46.949 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:46.949 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:47.208 [2024-07-15 13:04:08.713747] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.208 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:47.466 request: 00:11:47.466 { 00:11:47.466 "uuid": "7af21eab-f056-4779-951e-4f9cb9def063", 00:11:47.466 "method": "bdev_lvol_get_lvstores", 00:11:47.466 "req_id": 1 00:11:47.466 } 00:11:47.466 Got JSON-RPC error response 00:11:47.466 response: 00:11:47.466 { 00:11:47.466 "code": -19, 00:11:47.466 "message": "No such device" 00:11:47.466 } 00:11:47.466 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:47.466 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.466 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.466 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.466 13:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:47.725 aio_bdev 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 58ed3e5b-0824-404f-8950-f139719f9fbd 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=58ed3e5b-0824-404f-8950-f139719f9fbd 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:47.725 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:47.984 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58ed3e5b-0824-404f-8950-f139719f9fbd -t 2000 00:11:48.242 [ 00:11:48.243 { 00:11:48.243 "name": "58ed3e5b-0824-404f-8950-f139719f9fbd", 00:11:48.243 "aliases": [ 00:11:48.243 "lvs/lvol" 00:11:48.243 ], 00:11:48.243 "product_name": "Logical Volume", 00:11:48.243 
"block_size": 4096, 00:11:48.243 "num_blocks": 38912, 00:11:48.243 "uuid": "58ed3e5b-0824-404f-8950-f139719f9fbd", 00:11:48.243 "assigned_rate_limits": { 00:11:48.243 "rw_ios_per_sec": 0, 00:11:48.243 "rw_mbytes_per_sec": 0, 00:11:48.243 "r_mbytes_per_sec": 0, 00:11:48.243 "w_mbytes_per_sec": 0 00:11:48.243 }, 00:11:48.243 "claimed": false, 00:11:48.243 "zoned": false, 00:11:48.243 "supported_io_types": { 00:11:48.243 "read": true, 00:11:48.243 "write": true, 00:11:48.243 "unmap": true, 00:11:48.243 "flush": false, 00:11:48.243 "reset": true, 00:11:48.243 "nvme_admin": false, 00:11:48.243 "nvme_io": false, 00:11:48.243 "nvme_io_md": false, 00:11:48.243 "write_zeroes": true, 00:11:48.243 "zcopy": false, 00:11:48.243 "get_zone_info": false, 00:11:48.243 "zone_management": false, 00:11:48.243 "zone_append": false, 00:11:48.243 "compare": false, 00:11:48.243 "compare_and_write": false, 00:11:48.243 "abort": false, 00:11:48.243 "seek_hole": true, 00:11:48.243 "seek_data": true, 00:11:48.243 "copy": false, 00:11:48.243 "nvme_iov_md": false 00:11:48.243 }, 00:11:48.243 "driver_specific": { 00:11:48.243 "lvol": { 00:11:48.243 "lvol_store_uuid": "7af21eab-f056-4779-951e-4f9cb9def063", 00:11:48.243 "base_bdev": "aio_bdev", 00:11:48.243 "thin_provision": false, 00:11:48.243 "num_allocated_clusters": 38, 00:11:48.243 "snapshot": false, 00:11:48.243 "clone": false, 00:11:48.243 "esnap_clone": false 00:11:48.243 } 00:11:48.243 } 00:11:48.243 } 00:11:48.243 ] 00:11:48.243 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:48.243 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:48.243 13:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:48.501 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:48.501 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:48.501 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:48.757 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:48.758 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58ed3e5b-0824-404f-8950-f139719f9fbd 00:11:49.015 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7af21eab-f056-4779-951e-4f9cb9def063 00:11:49.273 13:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.531 00:11:49.531 real 0m17.507s 00:11:49.531 user 0m17.006s 00:11:49.531 sys 0m1.885s 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.531 13:04:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:49.531 ************************************ 00:11:49.531 END TEST lvs_grow_clean 00:11:49.531 ************************************ 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:49.531 ************************************ 00:11:49.531 START TEST lvs_grow_dirty 00:11:49.531 ************************************ 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.531 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:50.098 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:50.098 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:50.356 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=40f6df54-964e-4083-b1a3-86ce4748264c 00:11:50.356 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:11:50.356 13:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:50.613 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:50.613 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:50.613 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40f6df54-964e-4083-b1a3-86ce4748264c lvol 150 00:11:50.870 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=df272c1f-044f-40aa-a76c-474af0967b11 00:11:50.870 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:50.870 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:51.127 [2024-07-15 13:04:12.602353] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:51.127 [2024-07-15 13:04:12.602434] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:51.127 true 00:11:51.127 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:11:51.127 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:51.385 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:51.385 13:04:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:51.642 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df272c1f-044f-40aa-a76c-474af0967b11 00:11:51.900 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:52.157 [2024-07-15 13:04:13.609586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.157 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3795203 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3795203 /var/tmp/bdevperf.sock 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3795203 ']' 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.415 13:04:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.415 13:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:52.415 [2024-07-15 13:04:13.907056] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:11:52.415 [2024-07-15 13:04:13.907149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795203 ] 00:11:52.415 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.415 [2024-07-15 13:04:13.968054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.415 [2024-07-15 13:04:14.084169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.673 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.673 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:52.673 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:53.238 Nvme0n1 00:11:53.238 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:53.496 [ 00:11:53.496 { 00:11:53.496 "name": "Nvme0n1", 00:11:53.496 "aliases": [ 00:11:53.496 "df272c1f-044f-40aa-a76c-474af0967b11" 00:11:53.496 ], 00:11:53.496 "product_name": "NVMe disk", 00:11:53.496 "block_size": 4096, 00:11:53.496 "num_blocks": 38912, 00:11:53.496 "uuid": "df272c1f-044f-40aa-a76c-474af0967b11", 00:11:53.496 "assigned_rate_limits": { 00:11:53.496 "rw_ios_per_sec": 0, 00:11:53.496 "rw_mbytes_per_sec": 0, 00:11:53.496 "r_mbytes_per_sec": 0, 00:11:53.496 "w_mbytes_per_sec": 0 00:11:53.496 }, 00:11:53.496 "claimed": false, 00:11:53.496 "zoned": false, 00:11:53.496 "supported_io_types": { 00:11:53.496 "read": true, 00:11:53.496 "write": true, 00:11:53.496 "unmap": true, 00:11:53.496 "flush": true, 00:11:53.496 "reset": true, 00:11:53.496 "nvme_admin": true, 00:11:53.496 "nvme_io": true, 00:11:53.496 "nvme_io_md": false, 00:11:53.496 "write_zeroes": true, 00:11:53.496 "zcopy": false, 00:11:53.496 "get_zone_info": false, 00:11:53.496 "zone_management": false, 00:11:53.496 "zone_append": false, 00:11:53.496 "compare": true, 00:11:53.496 "compare_and_write": true, 00:11:53.496 "abort": true, 00:11:53.496 "seek_hole": false, 00:11:53.496 "seek_data": false, 00:11:53.496 "copy": true, 00:11:53.496 "nvme_iov_md": false 00:11:53.496 }, 00:11:53.496 "memory_domains": [ 00:11:53.496 { 00:11:53.496 "dma_device_id": "system", 00:11:53.496 "dma_device_type": 1 00:11:53.496 } 00:11:53.496 ], 00:11:53.496 "driver_specific": { 00:11:53.496 "nvme": [ 00:11:53.496 { 
00:11:53.496 "trid": { 00:11:53.496 "trtype": "TCP", 00:11:53.496 "adrfam": "IPv4", 00:11:53.496 "traddr": "10.0.0.2", 00:11:53.496 "trsvcid": "4420", 00:11:53.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:53.496 }, 00:11:53.496 "ctrlr_data": { 00:11:53.496 "cntlid": 1, 00:11:53.496 "vendor_id": "0x8086", 00:11:53.496 "model_number": "SPDK bdev Controller", 00:11:53.496 "serial_number": "SPDK0", 00:11:53.496 "firmware_revision": "24.09", 00:11:53.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:53.496 "oacs": { 00:11:53.496 "security": 0, 00:11:53.496 "format": 0, 00:11:53.496 "firmware": 0, 00:11:53.496 "ns_manage": 0 00:11:53.496 }, 00:11:53.496 "multi_ctrlr": true, 00:11:53.496 "ana_reporting": false 00:11:53.496 }, 00:11:53.496 "vs": { 00:11:53.496 "nvme_version": "1.3" 00:11:53.496 }, 00:11:53.496 "ns_data": { 00:11:53.496 "id": 1, 00:11:53.496 "can_share": true 00:11:53.496 } 00:11:53.496 } 00:11:53.496 ], 00:11:53.496 "mp_policy": "active_passive" 00:11:53.496 } 00:11:53.496 } 00:11:53.496 ] 00:11:53.496 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3795336 00:11:53.496 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:53.496 13:04:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:53.496 Latency(us) 00:11:53.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.496 Running I/O for 10 seconds... 00:11:54.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.431 Nvme0n1 : 1.00 14239.00 55.62 0.00 0.00 0.00 0.00 0.00 00:11:54.431 =================================================================================================================== 00:11:54.431 Total : 14239.00 55.62 0.00 0.00 0.00 0.00 0.00 00:11:54.431 00:11:55.362 13:04:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:11:55.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.620 Nvme0n1 : 2.00 14468.00 56.52 0.00 0.00 0.00 0.00 0.00 00:11:55.620 =================================================================================================================== 00:11:55.620 Total : 14468.00 56.52 0.00 0.00 0.00 0.00 0.00 00:11:55.620 00:11:55.620 true 00:11:55.620 13:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:11:55.620 13:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:55.878 13:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:55.878 13:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:55.878 13:04:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3795336 00:11:56.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.445 Nvme0n1 : 3.00 14548.33 56.83 0.00 0.00 0.00 0.00 0.00 00:11:56.445 =================================================================================================================== 
00:11:56.445 Total : 14548.33 56.83 0.00 0.00 0.00 0.00 0.00 00:11:56.445 00:11:57.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.819 Nvme0n1 : 4.00 14710.75 57.46 0.00 0.00 0.00 0.00 0.00 00:11:57.819 =================================================================================================================== 00:11:57.819 Total : 14710.75 57.46 0.00 0.00 0.00 0.00 0.00 00:11:57.819 00:11:58.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.751 Nvme0n1 : 5.00 14793.20 57.79 0.00 0.00 0.00 0.00 0.00 00:11:58.751 =================================================================================================================== 00:11:58.751 Total : 14793.20 57.79 0.00 0.00 0.00 0.00 0.00 00:11:58.751 00:11:59.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.683 Nvme0n1 : 6.00 14848.67 58.00 0.00 0.00 0.00 0.00 0.00 00:11:59.683 =================================================================================================================== 00:11:59.683 Total : 14848.67 58.00 0.00 0.00 0.00 0.00 0.00 00:11:59.683 00:12:00.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.622 Nvme0n1 : 7.00 14923.57 58.30 0.00 0.00 0.00 0.00 0.00 00:12:00.622 =================================================================================================================== 00:12:00.622 Total : 14923.57 58.30 0.00 0.00 0.00 0.00 0.00 00:12:00.622 00:12:01.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.555 Nvme0n1 : 8.00 14972.50 58.49 0.00 0.00 0.00 0.00 0.00 00:12:01.555 =================================================================================================================== 00:12:01.555 Total : 14972.50 58.49 0.00 0.00 0.00 0.00 0.00 00:12:01.555 00:12:02.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.485 Nvme0n1 : 9.00 14995.33 58.58 0.00 0.00 0.00 0.00 0.00 00:12:02.485 =================================================================================================================== 00:12:02.485 Total : 14995.33 58.58 0.00 0.00 0.00 0.00 0.00 00:12:02.486 00:12:03.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.416 Nvme0n1 : 10.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:12:03.416 =================================================================================================================== 00:12:03.416 Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:12:03.416 00:12:03.673 00:12:03.673 Latency(us) 00:12:03.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.673 Nvme0n1 : 10.01 15002.72 58.60 0.00 0.00 8526.91 3932.16 16699.54 00:12:03.673 =================================================================================================================== 00:12:03.673 Total : 15002.72 58.60 0.00 0.00 8526.91 3932.16 16699.54 00:12:03.673 { 00:12:03.673 "core_count": 1, 00:12:03.673 "test_results": [ 00:12:03.673 { 00:12:03.673 "job": "Nvme0n1", 00:12:03.673 "test_status": "finished", 00:12:03.673 "core_mask": "0x2", 00:12:03.673 "workload": "randwrite", 00:12:03.673 "queue_depth": 128, 00:12:03.673 "io_size": 4096, 00:12:03.673 "runtime": 10.00871753692627, 00:12:03.673 "io_per_second": 15002.720628156374, 00:12:03.673 "MiB_per_second": 58.604377453735836, 00:12:03.673 
"fails_per_second": 0.0, 00:12:03.673 "timeout_per_second": 0.0, 00:12:03.673 "average_latency_us": 8526.90723807466, 00:12:03.673 "min_latency_us": 3932.16, 00:12:03.673 "max_latency_us": 16699.543703703705 00:12:03.673 } 00:12:03.673 ] 00:12:03.673 } 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3795203 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3795203 ']' 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3795203 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3795203 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3795203' 00:12:03.673 killing process with pid 3795203 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3795203 00:12:03.673 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.673 00:12:03.673 Latency(us) 00:12:03.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.673 =================================================================================================================== 00:12:03.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.673 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3795203 00:12:03.931 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.188 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:04.445 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:04.445 13:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3792706 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3792706 00:12:04.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3792706 Killed "${NVMF_APP[@]}" "$@" 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3796669 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3796669 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3796669 ']' 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.703 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:04.703 [2024-07-15 13:04:26.317026] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:12:04.703 [2024-07-15 13:04:26.317110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.703 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.703 [2024-07-15 13:04:26.395715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.960 [2024-07-15 13:04:26.529324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.960 [2024-07-15 13:04:26.529408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.960 [2024-07-15 13:04:26.529433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.960 [2024-07-15 13:04:26.529467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.960 [2024-07-15 13:04:26.529486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
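The dirty variant of the test reaches this point by design: the previous nvmf_tgt (pid 3792706) was killed with -9 while the grown lvstore was still loaded, so the blobstore on the AIO file was never cleanly unloaded. A minimal sketch of that sequence, assuming SPDK's rpc.py is on PATH and a target is running; the file path, sizes, and names below are illustrative, not the test's exact values:

  # create an AIO-backed lvstore with one lvol, then kill the target uncleanly
  truncate -s 200M /tmp/aio_file
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150
  kill -9 "$nvmfpid"                 # lvstore left "dirty", no clean unload
  # after restarting nvmf_tgt, re-attach the same file; the blobstore then
  # replays its metadata, which is the "Performing recovery on blobstore"
  # notice that appears just below
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096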
00:12:04.960 [2024-07-15 13:04:26.529530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.960 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.960 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:04.960 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.960 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.960 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.218 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.218 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.218 [2024-07-15 13:04:26.910519] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:05.218 [2024-07-15 13:04:26.910645] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:05.218 [2024-07-15 13:04:26.910691] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev df272c1f-044f-40aa-a76c-474af0967b11 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=df272c1f-044f-40aa-a76c-474af0967b11 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:05.475 13:04:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:05.732 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df272c1f-044f-40aa-a76c-474af0967b11 -t 2000 00:12:05.990 [ 00:12:05.990 { 00:12:05.990 "name": "df272c1f-044f-40aa-a76c-474af0967b11", 00:12:05.990 "aliases": [ 00:12:05.990 "lvs/lvol" 00:12:05.990 ], 00:12:05.990 "product_name": "Logical Volume", 00:12:05.990 "block_size": 4096, 00:12:05.990 "num_blocks": 38912, 00:12:05.990 "uuid": "df272c1f-044f-40aa-a76c-474af0967b11", 00:12:05.990 "assigned_rate_limits": { 00:12:05.990 "rw_ios_per_sec": 0, 00:12:05.990 "rw_mbytes_per_sec": 0, 00:12:05.990 "r_mbytes_per_sec": 0, 00:12:05.990 "w_mbytes_per_sec": 0 00:12:05.990 }, 00:12:05.990 "claimed": false, 00:12:05.990 "zoned": false, 00:12:05.990 "supported_io_types": { 00:12:05.990 "read": true, 00:12:05.990 "write": true, 00:12:05.990 "unmap": true, 00:12:05.990 "flush": false, 00:12:05.990 "reset": true, 00:12:05.990 "nvme_admin": false, 00:12:05.990 "nvme_io": false, 00:12:05.990 "nvme_io_md": 
false, 00:12:05.990 "write_zeroes": true, 00:12:05.990 "zcopy": false, 00:12:05.990 "get_zone_info": false, 00:12:05.990 "zone_management": false, 00:12:05.990 "zone_append": false, 00:12:05.990 "compare": false, 00:12:05.990 "compare_and_write": false, 00:12:05.990 "abort": false, 00:12:05.990 "seek_hole": true, 00:12:05.990 "seek_data": true, 00:12:05.990 "copy": false, 00:12:05.990 "nvme_iov_md": false 00:12:05.990 }, 00:12:05.990 "driver_specific": { 00:12:05.990 "lvol": { 00:12:05.990 "lvol_store_uuid": "40f6df54-964e-4083-b1a3-86ce4748264c", 00:12:05.990 "base_bdev": "aio_bdev", 00:12:05.990 "thin_provision": false, 00:12:05.990 "num_allocated_clusters": 38, 00:12:05.990 "snapshot": false, 00:12:05.990 "clone": false, 00:12:05.990 "esnap_clone": false 00:12:05.990 } 00:12:05.990 } 00:12:05.990 } 00:12:05.990 ] 00:12:05.990 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:05.990 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:05.990 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:06.248 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:06.248 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:06.248 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:06.507 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:06.507 13:04:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.507 [2024-07-15 13:04:28.203748] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.765 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.766 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
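Deleting aio_bdev hot-removes the lvstore stacked on top of it, so the test now asserts that bdev_lvol_get_lvstores fails for the old UUID. The NOT helper driving that assertion essentially inverts the wrapped command's exit status (the real helper in autotest_common.sh also verifies the command is executable first, which is the type -t / type -P probing visible here); a rough sketch of the shape only:

  NOT() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"   # passes once the lvstore is gone

The expected JSON-RPC "No such device" response that makes this assertion pass follows below.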
00:12:06.766 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.766 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.766 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.766 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:07.023 request: 00:12:07.023 { 00:12:07.023 "uuid": "40f6df54-964e-4083-b1a3-86ce4748264c", 00:12:07.023 "method": "bdev_lvol_get_lvstores", 00:12:07.023 "req_id": 1 00:12:07.023 } 00:12:07.023 Got JSON-RPC error response 00:12:07.023 response: 00:12:07.023 { 00:12:07.023 "code": -19, 00:12:07.023 "message": "No such device" 00:12:07.023 } 00:12:07.023 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:07.023 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.023 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.023 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.023 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.280 aio_bdev 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df272c1f-044f-40aa-a76c-474af0967b11 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=df272c1f-044f-40aa-a76c-474af0967b11 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.280 13:04:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.538 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df272c1f-044f-40aa-a76c-474af0967b11 -t 2000 00:12:07.795 [ 00:12:07.795 { 00:12:07.795 "name": "df272c1f-044f-40aa-a76c-474af0967b11", 00:12:07.795 "aliases": [ 00:12:07.795 "lvs/lvol" 00:12:07.795 ], 00:12:07.795 "product_name": "Logical Volume", 00:12:07.795 "block_size": 4096, 00:12:07.795 "num_blocks": 38912, 00:12:07.795 "uuid": "df272c1f-044f-40aa-a76c-474af0967b11", 00:12:07.795 "assigned_rate_limits": { 00:12:07.795 "rw_ios_per_sec": 0, 00:12:07.795 "rw_mbytes_per_sec": 0, 00:12:07.795 "r_mbytes_per_sec": 0, 00:12:07.795 "w_mbytes_per_sec": 0 00:12:07.795 }, 00:12:07.795 "claimed": false, 00:12:07.795 "zoned": false, 00:12:07.795 "supported_io_types": { 
00:12:07.795 "read": true, 00:12:07.795 "write": true, 00:12:07.795 "unmap": true, 00:12:07.795 "flush": false, 00:12:07.795 "reset": true, 00:12:07.795 "nvme_admin": false, 00:12:07.795 "nvme_io": false, 00:12:07.795 "nvme_io_md": false, 00:12:07.795 "write_zeroes": true, 00:12:07.795 "zcopy": false, 00:12:07.795 "get_zone_info": false, 00:12:07.795 "zone_management": false, 00:12:07.795 "zone_append": false, 00:12:07.795 "compare": false, 00:12:07.795 "compare_and_write": false, 00:12:07.795 "abort": false, 00:12:07.795 "seek_hole": true, 00:12:07.795 "seek_data": true, 00:12:07.795 "copy": false, 00:12:07.795 "nvme_iov_md": false 00:12:07.795 }, 00:12:07.795 "driver_specific": { 00:12:07.795 "lvol": { 00:12:07.795 "lvol_store_uuid": "40f6df54-964e-4083-b1a3-86ce4748264c", 00:12:07.795 "base_bdev": "aio_bdev", 00:12:07.795 "thin_provision": false, 00:12:07.795 "num_allocated_clusters": 38, 00:12:07.795 "snapshot": false, 00:12:07.795 "clone": false, 00:12:07.795 "esnap_clone": false 00:12:07.795 } 00:12:07.795 } 00:12:07.795 } 00:12:07.795 ] 00:12:07.795 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:07.795 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:07.795 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:08.052 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:08.052 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:08.052 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.310 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.310 13:04:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df272c1f-044f-40aa-a76c-474af0967b11 00:12:08.567 13:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40f6df54-964e-4083-b1a3-86ce4748264c 00:12:08.825 13:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.082 00:12:09.082 real 0m19.380s 00:12:09.082 user 0m49.131s 00:12:09.082 sys 0m4.648s 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.082 ************************************ 00:12:09.082 END TEST lvs_grow_dirty 00:12:09.082 ************************************ 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:09.082 nvmf_trace.0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.082 rmmod nvme_tcp 00:12:09.082 rmmod nvme_fabrics 00:12:09.082 rmmod nvme_keyring 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3796669 ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3796669 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3796669 ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3796669 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796669 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796669' 00:12:09.082 killing process with pid 3796669 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3796669 00:12:09.082 13:04:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3796669 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.340 
13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.340 13:04:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.866 13:04:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.866 00:12:11.866 real 0m42.187s 00:12:11.866 user 1m11.853s 00:12:11.866 sys 0m8.397s 00:12:11.866 13:04:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.866 13:04:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.866 ************************************ 00:12:11.866 END TEST nvmf_lvs_grow 00:12:11.866 ************************************ 00:12:11.866 13:04:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.866 13:04:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:11.866 13:04:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.866 13:04:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.866 13:04:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.866 ************************************ 00:12:11.866 START TEST nvmf_bdev_io_wait 00:12:11.866 ************************************ 00:12:11.866 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:11.866 * Looking for test storage... 
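Each suite here is launched through the run_test wrapper, which times the script and brackets its output with the START TEST / END TEST banners seen above and below. A sketch of the observable shape only; the real function in autotest_common.sh also checks the argument count and propagates the exit status:

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                     # e.g. bdev_io_wait.sh --transport=tcp
      echo "************ END TEST $name ************"
  }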
00:12:11.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.866 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.866 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:11.866 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.867 13:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:13.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:13.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:13.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:13.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.767 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms
00:12:13.768
00:12:13.768 --- 10.0.0.2 ping statistics ---
00:12:13.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:13.768 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:13.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:13.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:12:13.768
00:12:13.768 --- 10.0.0.1 ping statistics ---
00:12:13.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:13.768 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3799191
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3799191
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3799191 ']'
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:13.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:13.768 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:13.768 [2024-07-15 13:04:35.332093] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:12:13.768 [2024-07-15 13:04:35.332176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.768 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.768 [2024-07-15 13:04:35.399840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.027 [2024-07-15 13:04:35.514976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.027 [2024-07-15 13:04:35.515032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.027 [2024-07-15 13:04:35.515045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.027 [2024-07-15 13:04:35.515056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.027 [2024-07-15 13:04:35.515065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.027 [2024-07-15 13:04:35.515144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.027 [2024-07-15 13:04:35.515169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.027 [2024-07-15 13:04:35.515227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.027 [2024-07-15 13:04:35.515229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 [2024-07-15 13:04:35.659852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
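Note: nvmf_tgt was started with --wait-for-rpc, which is what lets the test shrink the bdev_io pool (bdev_set_options -p 5 -c 1) before initialization completes; exhausting that deliberately tiny pool is what drives the I/O-wait path this test exercises. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same bring-up can be done by hand roughly as follows (a sketch; the socket path matches the waitforlisten output above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1    # bdev_io pool of 5, per-thread cache of 1
    $rpc -s /var/tmp/spdk.sock framework_start_init          # finish the deferred initialization
    $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit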
00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 Malloc0 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.027 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.028 [2024-07-15 13:04:35.720463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3799219 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3799221 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.028 { 00:12:14.028 "params": { 00:12:14.028 "name": "Nvme$subsystem", 00:12:14.028 "trtype": "$TEST_TRANSPORT", 00:12:14.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.028 "adrfam": "ipv4", 00:12:14.028 "trsvcid": "$NVMF_PORT", 00:12:14.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.028 "hdgst": ${hdgst:-false}, 00:12:14.028 "ddgst": ${ddgst:-false} 00:12:14.028 }, 00:12:14.028 "method": "bdev_nvme_attach_controller" 00:12:14.028 } 00:12:14.028 EOF 00:12:14.028 )") 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3799223 00:12:14.028 13:04:35 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:14.028 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.286 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.287 { 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme$subsystem", 00:12:14.287 "trtype": "$TEST_TRANSPORT", 00:12:14.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "$NVMF_PORT", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.287 "hdgst": ${hdgst:-false}, 00:12:14.287 "ddgst": ${ddgst:-false} 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 } 00:12:14.287 EOF 00:12:14.287 )") 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3799227 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.287 { 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme$subsystem", 00:12:14.287 "trtype": "$TEST_TRANSPORT", 00:12:14.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "$NVMF_PORT", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.287 "hdgst": ${hdgst:-false}, 00:12:14.287 "ddgst": ${ddgst:-false} 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 } 00:12:14.287 EOF 00:12:14.287 )") 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:14.287 { 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme$subsystem", 00:12:14.287 "trtype": "$TEST_TRANSPORT", 00:12:14.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "$NVMF_PORT", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.287 "hdgst": ${hdgst:-false}, 00:12:14.287 "ddgst": ${ddgst:-false} 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 } 00:12:14.287 EOF 00:12:14.287 )") 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3799219 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme1", 00:12:14.287 "trtype": "tcp", 00:12:14.287 "traddr": "10.0.0.2", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "4420", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.287 "hdgst": false, 00:12:14.287 "ddgst": false 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 }' 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme1", 00:12:14.287 "trtype": "tcp", 00:12:14.287 "traddr": "10.0.0.2", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "4420", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.287 "hdgst": false, 00:12:14.287 "ddgst": false 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 }' 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
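Note: each bdevperf instance reads its bdev configuration from a file descriptor rather than a file: gen_nvmf_target_json (from nvmf/common.sh, as the line markers above show) prints the JSON fragments being assembled here, and the test script passes them via bash process substitution, which is why every command line records --json /dev/fd/63. Reduced to the write job alone, the pattern is (a sketch; flags copied from the command lines in this log):

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # bash exposes the substitution as /dev/fd/63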
00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme1", 00:12:14.287 "trtype": "tcp", 00:12:14.287 "traddr": "10.0.0.2", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "4420", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.287 "hdgst": false, 00:12:14.287 "ddgst": false 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 }' 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:14.287 13:04:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:14.287 "params": { 00:12:14.287 "name": "Nvme1", 00:12:14.287 "trtype": "tcp", 00:12:14.287 "traddr": "10.0.0.2", 00:12:14.287 "adrfam": "ipv4", 00:12:14.287 "trsvcid": "4420", 00:12:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.287 "hdgst": false, 00:12:14.287 "ddgst": false 00:12:14.287 }, 00:12:14.287 "method": "bdev_nvme_attach_controller" 00:12:14.287 }' 00:12:14.287 [2024-07-15 13:04:35.767663] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:12:14.287 [2024-07-15 13:04:35.767745] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:14.287 [2024-07-15 13:04:35.768023] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:12:14.287 [2024-07-15 13:04:35.768025] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:12:14.287 [2024-07-15 13:04:35.768025] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:12:14.287 [2024-07-15 13:04:35.768107] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:12:14.287 [2024-07-15 13:04:35.768107] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:12:14.287 [2024-07-15 13:04:35.768108] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:12:14.287 EAL: No free 2048 kB hugepages reported on node 1
00:12:14.287 EAL: No free 2048 kB hugepages reported on node 1
00:12:14.287 [2024-07-15 13:04:35.946538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.545 EAL: No free 2048 kB hugepages reported on node 1
00:12:14.545 [2024-07-15 13:04:36.043576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:12:14.545 [2024-07-15 13:04:36.044718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.545 EAL: No free 2048 kB hugepages reported on node 1
00:12:14.545 [2024-07-15 13:04:36.141054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:12:14.545 [2024-07-15 13:04:36.142894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.545 [2024-07-15 13:04:36.210020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.545 [2024-07-15 13:04:36.238507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:12:14.802 [2024-07-15 13:04:36.306625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:12:14.802 Running I/O for 1 seconds...
00:12:14.802 Running I/O for 1 seconds...
00:12:15.060 Running I/O for 1 seconds...
00:12:15.060 Running I/O for 1 seconds...
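Note: the EAL banners and "Running I/O" lines above come from four bdevperf processes launched in parallel, one per workload; the distinct -i instance ids give each process its own DPDK file prefix (spdk1..spdk4) so their shared-memory and hugepage files cannot collide. The script then blocks on each job by pid, matching the wait calls recorded in this log:

    wait $WRITE_PID    # 3799219: -m 0x10, -w write
    wait $READ_PID     # 3799221: -m 0x20, -w read
    wait $FLUSH_PID    # 3799223: -m 0x40, -w flush
    wait $UNMAP_PID    # 3799227: -m 0x80, -w unmap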
00:12:15.992
00:12:15.992                                                                                        Latency(us)
00:12:15.992 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:12:15.992 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:15.992      Nvme1n1              :       1.01   11721.30      45.79       0.00       0.00    10879.98    6213.78   20291.89
00:12:15.992 ===================================================================================================================
00:12:15.992      Total                :              11721.30      45.79       0.00       0.00    10879.98    6213.78   20291.89
00:12:15.992
00:12:15.992                                                                                        Latency(us)
00:12:15.992 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:12:15.992 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:15.992      Nvme1n1              :       1.02    5731.91      22.39       0.00       0.00    22116.00    8252.68   33010.73
00:12:15.992 ===================================================================================================================
00:12:15.992      Total                :               5731.91      22.39       0.00       0.00    22116.00    8252.68   33010.73
00:12:15.992
00:12:15.992                                                                                        Latency(us)
00:12:15.992 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:12:15.992 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:15.992      Nvme1n1              :       1.00   61786.75     241.35       0.00       0.00     2061.83     288.24    4538.97
00:12:15.992 ===================================================================================================================
00:12:15.992      Total                :              61786.75     241.35       0.00       0.00     2061.83     288.24    4538.97
00:12:15.992
00:12:15.992                                                                                        Latency(us)
00:12:15.992 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:12:15.992 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:15.992      Nvme1n1              :       1.01    6069.54      23.71       0.00       0.00    21010.12    4296.25   46603.38
00:12:15.992 ===================================================================================================================
00:12:15.992      Total                :               6069.54      23.71       0.00       0.00    21010.12    4296.25   46603.38
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3799221
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3799223
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3799227
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:16.249 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:16.249 rmmod nvme_tcp
00:12:16.249 rmmod nvme_fabrics
00:12:16.249 rmmod nvme_keyring
00:12:16.506 13:04:37
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3799191 ']'
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3799191
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3799191 ']'
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3799191
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799191
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799191'
00:12:16.506 killing process with pid 3799191
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3799191
00:12:16.506 13:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3799191
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:16.764 13:04:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:18.663 13:04:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:18.663
00:12:18.663 real    0m7.187s
00:12:18.663 user    0m15.849s
00:12:18.663 sys     0m3.603s
00:12:18.663 13:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:18.663 13:04:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:18.663 ************************************
00:12:18.663 END TEST nvmf_bdev_io_wait
00:12:18.663 ************************************
00:12:18.663 13:04:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:18.663 13:04:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:12:18.663 13:04:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:18.663 13:04:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:18.663 13:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:18.663 ************************************
00:12:18.663 START TEST nvmf_queue_depth
************************************ 00:12:18.663 13:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:18.921 * Looking for test storage... 00:12:18.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.921 13:04:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.818 
13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.818 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:20.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:20.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:20.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:20.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.819 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
00:12:21.077
00:12:21.077 --- 10.0.0.2 ping statistics ---
00:12:21.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:21.077 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:21.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:21.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:12:21.077
00:12:21.077 --- 10.0.0.1 ping statistics ---
00:12:21.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:21.077 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3801510
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3801510
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3801510 ']'
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:21.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:21.077 13:04:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:21.077 [2024-07-15 13:04:42.653291] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
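The xtrace above is nvmf_tcp_init from nvmf/common.sh building a loopback NVMe/TCP topology out of the dual-port E810 discovered earlier: port cvl_0_0 is moved into a private network namespace to act as the target, its sibling cvl_0_1 stays in the root namespace as the initiator, and one ping in each direction proves the path before the target is started. A minimal standalone sketch of the same setup (interface names, namespace name, addresses, and port are the ones in the log; requires root and a NIC pair that actually reaches itself on the wire):

TARGET_IF=cvl_0_0          # becomes the target port, hidden inside the namespace
INITIATOR_IF=cvl_0_1       # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic (port 4420) in on the initiator side, then verify both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> initiator port

Because the target process is later launched with ip netns exec cvl_0_0_ns_spdk, it only ever sees the target port, so one physical adapter behaves like two machines.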
00:12:21.077 [2024-07-15 13:04:42.653383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.077 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.077 [2024-07-15 13:04:42.725995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.336 [2024-07-15 13:04:42.841810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.336 [2024-07-15 13:04:42.841873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.336 [2024-07-15 13:04:42.841908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.336 [2024-07-15 13:04:42.841922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.336 [2024-07-15 13:04:42.841933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.336 [2024-07-15 13:04:42.841963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.924 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:21.924 [2024-07-15 13:04:43.620887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.183 Malloc0 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.183 
13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.183 [2024-07-15 13:04:43.682492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3801615 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3801615 /var/tmp/bdevperf.sock 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3801615 ']' 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:22.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.183 13:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.183 [2024-07-15 13:04:43.732393] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
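With the target up, queue_depth.sh configures everything else over the RPC socket; the rpc_cmd lines above are thin wrappers that end up as scripts/rpc.py calls against /var/tmp/spdk.sock. Condensed into direct rpc.py invocations (all arguments copied from the log; treating rpc_cmd as equivalent to a plain rpc.py call is a simplification of the harness):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192             # TCP transport, flags as logged
"$RPC" bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM bdev, 512 B blocks
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0                # expose the bdev as a namespace
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

The listener address is the namespaced port (10.0.0.2:4420), which is exactly what bdevperf connects to from the root namespace in the next step.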
00:12:22.183 [2024-07-15 13:04:43.732467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801615 ]
00:12:22.183 EAL: No free 2048 kB hugepages reported on node 1
00:12:22.183 [2024-07-15 13:04:43.800765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:22.441 [2024-07-15 13:04:43.917791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:22.441 NVMe0n1
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:22.441 13:04:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:22.698 Running I/O for 10 seconds...
00:12:32.669
00:12:32.669 Latency(us)
00:12:32.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:32.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:32.669 Verification LBA range: start 0x0 length 0x4000
00:12:32.669 NVMe0n1 : 10.09 8411.78 32.86 0.00 0.00 121223.92 24758.04 74177.04
00:12:32.669 ===================================================================================================================
00:12:32.669 Total : 8411.78 32.86 0.00 0.00 121223.92 24758.04 74177.04
00:12:32.669 {
00:12:32.669 "core_count": 1,
00:12:32.669 "test_results": [
00:12:32.669 {
00:12:32.669 "job": "NVMe0n1",
00:12:32.669 "test_status": "finished",
00:12:32.669 "core_mask": "0x1",
00:12:32.669 "workload": "verify",
00:12:32.669 "verify_LBA_range_start": 0,
00:12:32.669 "verify_LBA_range_len": 16384,
00:12:32.669 "queue_depth": 1024,
00:12:32.669 "io_size": 4096,
00:12:32.669 "runtime": 10.08835220336914,
00:12:32.669 "io_per_second": 8411.780239230353,
00:12:32.669 "MiB_per_second": 32.858516559493566,
00:12:32.669 "fails_per_second": 0.0,
00:12:32.669 "timeout_per_second": 0.0,
00:12:32.669 "average_latency_us": 121223.91634040329,
00:12:32.669 "min_latency_us": 24758.044444444444,
00:12:32.669 "max_latency_us": 74177.04296296297
00:12:32.669 }
00:12:32.669 ]
00:12:32.669 }
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3801615
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3801615 ']'
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3801615
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:32.669 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801615
00:12:32.928 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:32.928 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:32.928 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801615'
00:12:32.928 killing process with pid 3801615
00:12:32.928 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3801615
00:12:32.928 Received shutdown signal, test time was about 10.000000 seconds
00:12:32.928
00:12:32.928 Latency(us)
00:12:32.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:32.928 ===================================================================================================================
00:12:32.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:32.928 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3801615
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:33.187 rmmod nvme_tcp
00:12:33.187 rmmod nvme_fabrics
00:12:33.187 rmmod nvme_keyring
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3801510 ']'
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3801510
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3801510 ']'
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3801510
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801510
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801510'
00:12:33.187 killing process with pid 3801510
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3801510
00:12:33.187 13:04:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3801510
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:33.446 13:04:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:35.987 13:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:35.987
00:12:35.987 real 0m16.782s
00:12:35.987 user 0m23.501s
00:12:35.987 sys 0m3.061s
00:12:35.987 13:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:35.987 13:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 ************************************
00:12:35.987 END TEST nvmf_queue_depth
00:12:35.987 ************************************
00:12:35.987 13:04:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:35.987 13:04:57 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:12:35.987 13:04:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:35.987 13:04:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:35.987 13:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 ************************************
00:12:35.987 START TEST nvmf_target_multipath
00:12:35.987 ************************************
00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:12:35.987 * Looking for test storage...
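The queue-depth run above reported its results twice: once as the human-readable Latency table and once as the JSON document keyed by test_results. The JSON form is the convenient one for post-processing; a small sketch, assuming the JSON block has been saved by hand to a file named results.json and that jq is installed (neither is done by the test itself):

# Pull the headline numbers out of the bdevperf JSON summary.
jq -r '.test_results[]
       | [.job, .workload, .queue_depth,
          (.io_per_second | round), (.average_latency_us | round)]
       | @tsv' results.json

For the run above this prints NVMe0n1, verify, 1024, 8412, 121224, the same numbers the table shows as 8411.78 IOPS at an average latency of 121223.92 us.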
00:12:35.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.987 
13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.987 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.988 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.988 13:04:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.988 13:04:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.903 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.904 
13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.904 
13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:12:37.904 00:12:37.904 --- 10.0.0.2 ping statistics --- 00:12:37.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.904 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:12:37.904 00:12:37.904 --- 10.0.0.1 ping statistics --- 00:12:37.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.904 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:37.904 only one NIC for nvmf test 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.904 rmmod nvme_tcp 00:12:37.904 rmmod nvme_fabrics 00:12:37.904 rmmod nvme_keyring 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:37.904 13:04:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.904 13:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.809 00:12:39.809 real 0m4.315s 00:12:39.809 user 0m0.798s 00:12:39.809 sys 0m1.502s 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.809 13:05:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:39.809 
************************************ 00:12:39.809 END TEST nvmf_target_multipath 00:12:39.809 ************************************ 00:12:39.809 13:05:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.809 13:05:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:39.809 13:05:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.809 13:05:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.809 13:05:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.068 ************************************ 00:12:40.068 START TEST nvmf_zcopy 00:12:40.068 ************************************ 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:40.068 * Looking for test storage... 00:12:40.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
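The gather_supported_nvmf_pci_devs pass that follows (identical to the ones in the queue-depth and multipath tests above) boils down to: build a whitelist of supported NIC PCI IDs (e810, x722, mlx), scan the PCI bus for matches, then resolve each matching function to its kernel net device through sysfs. A standalone sketch of that walk, reduced to the single e810 ID present on this machine (0x8086:0x159b); the sysfs reads are a simplified stand-in for the script's pci_bus_cache lookup:

# Find E810 functions and the net device sysfs ties to each one.
for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
    pci=${dev##*/}                                  # e.g. 0000:0a:00.0
    echo "Found $pci (0x8086 - 0x159b)"
    # Same expansion the script uses: /sys/bus/pci/devices/$pci/net/* holds
    # one entry per net device bound to this function.
    for net in "$dev"/net/*; do
        [[ -e $net ]] || continue                   # no net device bound here
        echo "Found net devices under $pci: ${net##*/}"
    done
done

On this rig the walk yields cvl_0_0 and cvl_0_1, which is why the same two "Found net devices under 0000:0a:00.x" lines repeat in every test below.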
00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.068 13:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.970 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- 
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:41.971 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:42.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:42.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:12:42.230
00:12:42.230 --- 10.0.0.2 ping statistics ---
00:12:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:42.230 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:42.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:42.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms
00:12:42.230
00:12:42.230 --- 10.0.0.1 ping statistics ---
00:12:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:42.230 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
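nvmf_tcp_init, traced above, moves one physical port into a private network namespace so the target (10.0.0.2, inside the netns) and the initiator (10.0.0.1, in the root namespace) talk over the real wire between the two E810 ports. A minimal reproduction of that topology, assuming two cabled interfaces with the names from this run (substitute your own):

#!/usr/bin/env bash
# Interface names as seen in the trace; adjust for your hardware.
IF_T=cvl_0_0; IF_I=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$IF_T"; ip -4 addr flush "$IF_I"
ip netns add "$NS"
ip link set "$IF_T" netns "$NS"               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$IF_I"           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_T"
ip link set "$IF_I" up
ip netns exec "$NS" ip link set "$IF_T" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$IF_I" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both ways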
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3806769
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3806769
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3806769 ']'
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:42.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:42.230 13:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.230 [2024-07-15 13:05:03.762840] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:12:42.230 [2024-07-15 13:05:03.762944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:42.230 EAL: No free 2048 kB hugepages reported on node 1
00:12:42.230 [2024-07-15 13:05:03.828057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:42.489 [2024-07-15 13:05:03.938486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:42.489 [2024-07-15 13:05:03.938535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:42.489 [2024-07-15 13:05:03.938564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:42.489 [2024-07-15 13:05:03.938576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:42.489 [2024-07-15 13:05:03.938587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:42.489 [2024-07-15 13:05:03.938618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
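The target binary is launched through ip netns exec so its TCP listener binds inside the namespace, and waitforlisten then polls the RPC socket before the test proceeds. A hedged sketch of that startup, loosely following the nvmfappstart/waitforlisten trace above (the retry loop is simplified; paths come from the log):

#!/usr/bin/env bash
# Start nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do                       # max_retries=100, as in the log
    [[ -S /var/tmp/spdk.sock ]] && break        # the RPC endpoint is a UNIX socket
    kill -0 "$nvmfpid" 2>/dev/null || exit 1    # bail out if the target died early
    sleep 0.1
done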
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 [2024-07-15 13:05:04.091849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 [2024-07-15 13:05:04.108043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 malloc0
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
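Each rpc_cmd above effectively forwards the same subcommand to scripts/rpc.py over /var/tmp/spdk.sock inside the namespace. The equivalent standalone configuration, with every argument taken verbatim from the trace (the array-based wrapper is just a convenience of this sketch):

#!/usr/bin/env bash
# Configure the same zcopy target by hand via SPDK's rpc.py.
RPC=(ip netns exec cvl_0_0_ns_spdk scripts/rpc.py)
"${RPC[@]}" nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy on
"${RPC[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"${RPC[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"${RPC[@]}" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"${RPC[@]}" bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM bdev, 4 KiB blocks
"${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1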
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:42.489 {
00:12:42.489 "params": {
00:12:42.489 "name": "Nvme$subsystem",
00:12:42.489 "trtype": "$TEST_TRANSPORT",
00:12:42.489 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:42.489 "adrfam": "ipv4",
00:12:42.489 "trsvcid": "$NVMF_PORT",
00:12:42.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:42.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:42.489 "hdgst": ${hdgst:-false},
00:12:42.489 "ddgst": ${ddgst:-false}
00:12:42.489 },
00:12:42.489 "method": "bdev_nvme_attach_controller"
00:12:42.489 }
00:12:42.489 EOF
00:12:42.489 )")
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:42.489 13:05:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:42.489 "params": {
00:12:42.489 "name": "Nvme1",
00:12:42.489 "trtype": "tcp",
00:12:42.489 "traddr": "10.0.0.2",
00:12:42.489 "adrfam": "ipv4",
00:12:42.489 "trsvcid": "4420",
00:12:42.489 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:42.489 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:42.489 "hdgst": false,
00:12:42.489 "ddgst": false
00:12:42.489 },
00:12:42.489 "method": "bdev_nvme_attach_controller"
00:12:42.489 }'
00:12:42.489 [2024-07-15 13:05:04.188371] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:12:42.489 [2024-07-15 13:05:04.188449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806835 ]
00:12:42.748 EAL: No free 2048 kB hugepages reported on node 1
00:12:42.748 [2024-07-15 13:05:04.252726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:42.748 [2024-07-15 13:05:04.368655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:43.006 Running I/O for 10 seconds...
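bdevperf never sees a config file here: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza echoed above, and the test hands it over through a process-substitution fd (hence --json /dev/fd/62). A reduced sketch of the same pattern; the outer "subsystems"/"bdev" wrapper is my assumption about the full shape gen_nvmf_target_json emits, since the trace only echoes the inner method object:

#!/usr/bin/env bash
# <(...) expands to a /dev/fd/NN path, exactly like --json /dev/fd/62 in the trace.
gen_json() {
cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1" } } ] } ] }
EOF
}
./build/examples/bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192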
00:12:52.981
00:12:52.981                                                 Latency(us)
00:12:52.981 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:52.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:52.981 Verification LBA range: start 0x0 length 0x1000
00:12:52.981 Nvme1n1                     :      10.02    5695.85      44.50       0.00       0.00   22407.41    3980.71   33787.45
00:12:52.981 ===================================================================================================================
00:12:52.981 Total                       :               5695.85      44.50       0.00       0.00   22407.41    3980.71   33787.45
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3808116
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:53.239 {
00:12:53.239 "params": {
00:12:53.239 "name": "Nvme$subsystem",
00:12:53.239 "trtype": "$TEST_TRANSPORT",
00:12:53.239 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:53.239 "adrfam": "ipv4",
00:12:53.239 "trsvcid": "$NVMF_PORT",
00:12:53.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:53.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:53.239 "hdgst": ${hdgst:-false},
00:12:53.239 "ddgst": ${ddgst:-false}
00:12:53.239 },
00:12:53.239 "method": "bdev_nvme_attach_controller"
00:12:53.239 }
00:12:53.239 EOF
00:12:53.239 )")
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
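Before the second run's output arrives, the verify-run table above can be sanity-checked: MiB/s is IOPS times IO size, and with queue depth 128 the average latency should sit near qd/IOPS (Little's law). A quick check with the table's numbers:

# 5695.85 IOPS * 8192 B / 2^20 B per MiB -> MiB/s; 128 / IOPS -> expected avg latency.
awk 'BEGIN { iops=5695.85; iosz=8192; qd=128;
             printf "%.2f MiB/s, %.2f us expected avg latency\n",
                    iops*iosz/1048576, qd/iops*1e6 }'
# Prints 44.50 MiB/s (matches the table) and ~22472 us, within ~0.3% of the
# reported 22407.41 us average.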
00:12:53.239 [2024-07-15 13:05:14.871107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.239 [2024-07-15 13:05:14.871151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:53.239 13:05:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:53.239 "params": {
00:12:53.239 "name": "Nvme1",
00:12:53.239 "trtype": "tcp",
00:12:53.239 "traddr": "10.0.0.2",
00:12:53.239 "adrfam": "ipv4",
00:12:53.239 "trsvcid": "4420",
00:12:53.239 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:53.239 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:53.239 "hdgst": false,
00:12:53.239 "ddgst": false
00:12:53.239 },
00:12:53.239 "method": "bdev_nvme_attach_controller"
00:12:53.239 }'
00:12:53.239 [2024-07-15 13:05:14.879073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.239 [2024-07-15 13:05:14.879100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.239 [2024-07-15 13:05:14.887081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.239 [2024-07-15 13:05:14.887103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.239 [2024-07-15 13:05:14.895100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.239 [2024-07-15 13:05:14.895121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.239 [2024-07-15 13:05:14.903123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.239 [2024-07-15 13:05:14.903143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.239 [2024-07-15 13:05:14.906868] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
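From here the trace interleaves the second bdevperf's startup with a long run of identical ERROR pairs from the target. They look alarming but appear deliberate: while the 5-second randrw job runs, the test keeps calling nvmf_subsystem_add_ns for NSID 1, which is already attached, so every attempt is rejected; the nvmf_rpc_ns_paused source of the second message suggests each failed add pauses and resumes the subsystem, and exercising that path under live zero-copy I/O seems to be the point. A sketch of that reading (the iteration count is a placeholder, not taken from zcopy.sh):

#!/usr/bin/env bash
# Repeatedly provoke the 'NSID 1 already in use' rejection while I/O runs;
# each failing add pauses/resumes nqn.2016-06.io.spdk:cnode1 under load.
for _ in $(seq 1 50); do   # 50 is a hypothetical count, not from the log
    ip netns exec cvl_0_0_ns_spdk scripts/rpc.py \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || true            # failure is the expected outcome here
done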
00:12:53.239 [2024-07-15 13:05:14.906954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808116 ] 00:12:53.239 [2024-07-15 13:05:14.911147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-07-15 13:05:14.911193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-07-15 13:05:14.919181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-07-15 13:05:14.919200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-07-15 13:05:14.927204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-07-15 13:05:14.927238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.239 [2024-07-15 13:05:14.935247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-07-15 13:05:14.935274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.943257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.943281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.951289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.951313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.959301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.959325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.967324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.967348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.969229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.499 [2024-07-15 13:05:14.975369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.975400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.983402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.983440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.991390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.991414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:14.999412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:14.999436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.007433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 
13:05:15.007457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.015454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.015478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.023475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.023499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.031508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.031535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.039549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.039587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.047544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.047568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.055564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.055589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.063586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.063610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.071608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.071632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.079629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.079654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.086635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.499 [2024-07-15 13:05:15.087650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.087674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.095674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.095701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.103718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.103753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.111752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.111791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.119763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.499 [2024-07-15 13:05:15.119802] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.499 [2024-07-15 13:05:15.127788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.127827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.135807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.135847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.143827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.143866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.151849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.151895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.159844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.159869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.167894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.167942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.175933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.175967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.183931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.183954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.500 [2024-07-15 13:05:15.191949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.500 [2024-07-15 13:05:15.191980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.199971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.199993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.207997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.208022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.216019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.216044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.224040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.224063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.232044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.232067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.240100] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.240125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.248090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.248112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.256108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.256130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.264129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.264163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.307182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.307211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.312295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.312322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 Running I/O for 5 seconds... 00:12:53.760 [2024-07-15 13:05:15.320322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.320347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.332776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.332804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.342646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.342674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.354314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.354342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.364789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.364816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.375567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.375595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.386523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.386551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.397424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.397458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.410639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 
[2024-07-15 13:05:15.410666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.421445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.421472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.431962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.431989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.442953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.442980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.760 [2024-07-15 13:05:15.453606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.760 [2024-07-15 13:05:15.453634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.466258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.466286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.476499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.476526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.487040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.487067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.499991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.500020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.510237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.510282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.521142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.521181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.534194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.534222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.544651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.544681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.555425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.555452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.568184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.568212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.578343] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.578371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.588996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.589023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.601804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.601840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.611721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.611749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.623185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.623213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.633813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.633840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.644483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.644511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.657151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.657179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.667011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.667040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.677773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.677800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.690178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.690206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.700068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.700110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.021 [2024-07-15 13:05:15.710796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.021 [2024-07-15 13:05:15.710822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.721729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.721756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.734186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.734213] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.744425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.744451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.755706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.755736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.769178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.769208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.780249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.780279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.791717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.791747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.803092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.803122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.814131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.814170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.825560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.825590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.837068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.837098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.848167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.848197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.859660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.859690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.873271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.873302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.884001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.884031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.895611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.895641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.907543] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.907573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.919107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.919137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.932345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.932375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.942716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.942746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.954050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.954079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.965118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.965148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.281 [2024-07-15 13:05:15.976747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.281 [2024-07-15 13:05:15.976778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.540 [2024-07-15 13:05:15.988148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.540 [2024-07-15 13:05:15.988178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.540 [2024-07-15 13:05:15.999214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.540 [2024-07-15 13:05:15.999244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.540 [2024-07-15 13:05:16.012593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.540 [2024-07-15 13:05:16.012623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.540 [2024-07-15 13:05:16.023544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.540 [2024-07-15 13:05:16.023574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.540 [2024-07-15 13:05:16.035127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.540 [2024-07-15 13:05:16.035166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.045978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.046007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.057082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.057113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.070010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.070040] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.080917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.080947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.092772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.092802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.104069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.104099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.115532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.115562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.127366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.127397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.138169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.138201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.149697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.149727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.160996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.161026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.172166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.172196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.185435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.185465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.196810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.196840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.207793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.207823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.218473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.218503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.229607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.541 [2024-07-15 13:05:16.229637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.541 [2024-07-15 13:05:16.240910] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.240940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.252411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.252461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.263904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.263934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.275864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.275903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.287513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.287543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.298945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.298974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.310373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.310402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.321681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.321711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.332948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.332978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.344430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.344460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.355075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.355104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.366034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.366064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.379195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.379225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.389413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.389443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.802 [2024-07-15 13:05:16.401020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.802 [2024-07-15 13:05:16.401049] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.802 [2024-07-15 13:05:16.412200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.802 [2024-07-15 13:05:16.412229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every add-namespace attempt, roughly one every 11 ms, from 2024-07-15 13:05:16.423 through 13:05:19.698 (elapsed 00:12:54.802 to 00:12:58.184), several hundred attempts in total; only the timestamps differ ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.710366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.710396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.722079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.722108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.733856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.733896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.745442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.745472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.758550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.758580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.769100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.769130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.780541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.780571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.791949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.791979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.803782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.803812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.815901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.815931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.827595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.827624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.839267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.839297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.852583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.852613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.184 [2024-07-15 13:05:19.873848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.184 [2024-07-15 13:05:19.873889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.885626] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.885657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.896567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.896597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.908052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.908082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.919761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.919801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.931325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.931355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.942738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.942768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.954014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.954044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.965969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.965999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.977694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.977725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:19.989099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:19.989129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.000436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.000466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.017460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.017504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.028358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.028388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.040422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.040453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.052199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.052230] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.063503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.063534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.075276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.075307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.087127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.087168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.099170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.099201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.110783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.110813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.122715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.122745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.443 [2024-07-15 13:05:20.134428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.443 [2024-07-15 13:05:20.134458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.146593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.146635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.158070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.158100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.169205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.169235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.180857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.180898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.192532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.192563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.204160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.204190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.215621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.215652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.227489] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.227519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.239376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.239406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.250658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.250688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.262015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.262045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.273458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.273487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.284457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.284487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.295937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.295967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.309106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.309136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.319574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.319603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.331706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.331736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.341847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.341883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 00:12:58.702 Latency(us) 00:12:58.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.702 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:58.702 Nvme1n1 : 5.01 11305.37 88.32 0.00 0.00 11307.08 4951.61 19903.53 00:12:58.702 =================================================================================================================== 00:12:58.702 Total : 11305.37 88.32 0.00 0.00 11307.08 4951.61 19903.53 00:12:58.702 [2024-07-15 13:05:20.346413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.346440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.354436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.354463] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.362470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.362496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.370505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.370546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.378543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.378592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.386573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.386619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.702 [2024-07-15 13:05:20.394584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.702 [2024-07-15 13:05:20.394629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.402598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.402644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.410627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.410676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.418652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.418699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.426673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.426719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.434695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.434742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.442727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.442776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.450737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.450784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.458763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.458811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.466785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.466831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.474807] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.474854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.482821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.482867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.960 [2024-07-15 13:05:20.490839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.960 [2024-07-15 13:05:20.490893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.498827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.498851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.506847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.506870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.514869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.514901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.522897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.522920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.530929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.530955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.539002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.539046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.547008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.547052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.554980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.555005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.562999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.563023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.571019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.571042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.579041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.579065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.587067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.587091] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.595138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.595186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.603155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.603200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.611129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.611153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.619154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.619178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 [2024-07-15 13:05:20.627175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.961 [2024-07-15 13:05:20.627198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3808116) - No such process 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3808116 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.961 delay0 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.961 13:05:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:59.218 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.218 [2024-07-15 13:05:20.795027] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:05.793 Initializing NVMe Controllers 00:13:05.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:05.793 Initialization complete. Launching workers. 
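Note on the trimmed error storm above: zcopy.sh deliberately keeps re-issuing an add-namespace RPC for an NSID that already exists while zero-copy I/O is in flight, so every retry is expected to fail; the "kill: (3808116) - No such process" record marks the point where the background job has already exited and the loop stops. A minimal sketch of that kind of loop is below; the variable names perf_pid and rootdir are illustrative assumptions, not the verbatim zcopy.sh source.

    # Sketch only: hammer the target with an RPC that must fail while NSID 1
    # exists, proving the target keeps servicing RPCs during zero-copy I/O.
    while kill -0 "$perf_pid" 2>/dev/null; do
        "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

Once the loop ends, the test swaps NSID 1 over to a delay bdev (bdev_delay_create, then nvmf_subsystem_add_ns delay0 -n 1) and runs the abort example whose per-namespace statistics follow.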
00:13:05.793 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 272, failed: 9761 00:13:05.793 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9945, failed to submit 88 00:13:05.793 success 9832, unsuccess 113, failed 0 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.793 rmmod nvme_tcp 00:13:05.793 rmmod nvme_fabrics 00:13:05.793 rmmod nvme_keyring 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3806769 ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3806769 ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806769' 00:13:05.793 killing process with pid 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3806769 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.793 13:05:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.334 13:05:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.334 00:13:08.334 real 0m27.995s 00:13:08.334 user 0m40.764s 00:13:08.334 sys 0m8.917s 00:13:08.334 13:05:29 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.334 13:05:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.334 ************************************ 00:13:08.334 END TEST nvmf_zcopy 00:13:08.334 ************************************ 00:13:08.334 13:05:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.334 13:05:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.334 13:05:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.334 13:05:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.334 13:05:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.334 ************************************ 00:13:08.334 START TEST nvmf_nmic 00:13:08.334 ************************************ 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:08.334 * Looking for test storage... 00:13:08.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:08.334 [... paths/export.sh@3 through @6 re-export and echo the same PATH with the go/protoc/golangci prefixes prepended again; duplicated output trimmed ...]
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.334 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.335 13:05:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.335 13:05:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.244 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:13:10.245 00:13:10.245 --- 10.0.0.2 ping statistics --- 00:13:10.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.245 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:13:10.245 00:13:10.245 --- 10.0.0.1 ping statistics --- 00:13:10.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.245 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3811478 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3811478 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3811478 ']' 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.245 13:05:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.245 [2024-07-15 13:05:31.652079] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:13:10.245 [2024-07-15 13:05:31.652182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.245 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.245 [2024-07-15 13:05:31.722698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.245 [2024-07-15 13:05:31.845564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.245 [2024-07-15 13:05:31.845635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:10.245 [2024-07-15 13:05:31.845652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.245 [2024-07-15 13:05:31.845665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.245 [2024-07-15 13:05:31.845676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.245 [2024-07-15 13:05:31.845798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.245 [2024-07-15 13:05:31.845865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.245 [2024-07-15 13:05:31.845936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.245 [2024-07-15 13:05:31.845931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.181 [2024-07-15 13:05:32.615025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.181 Malloc0 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.181 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.182 [2024-07-15 13:05:32.666761] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:11.182 test case1: single bdev can't be used in multiple subsystems 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.182 [2024-07-15 13:05:32.690624] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:11.182 [2024-07-15 13:05:32.690653] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:11.182 [2024-07-15 13:05:32.690668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.182 request: 00:13:11.182 { 00:13:11.182 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:11.182 "namespace": { 00:13:11.182 "bdev_name": "Malloc0", 00:13:11.182 "no_auto_visible": false 00:13:11.182 }, 00:13:11.182 "method": "nvmf_subsystem_add_ns", 00:13:11.182 "req_id": 1 00:13:11.182 } 00:13:11.182 Got JSON-RPC error response 00:13:11.182 response: 00:13:11.182 { 00:13:11.182 "code": -32602, 00:13:11.182 "message": "Invalid parameters" 00:13:11.182 } 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:11.182 Adding namespace failed - expected result. 
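For reference, test case1 above can be reproduced by hand with SPDK's scripts/rpc.py; these are essentially the same RPCs the rpc_cmd wrapper issued in the log, assuming the default /var/tmp/spdk.sock RPC socket:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected failure: bdev already claimed

The second add_ns fails with the "bdev Malloc0 already claimed: type exclusive_write" error logged above, which is why the script treats the -32602 JSON-RPC response as the expected result before moving on to test case2 below.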
00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:11.182 test case2: host connect to nvmf target in multiple paths 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:11.182 [2024-07-15 13:05:32.698728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.182 13:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.750 13:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:12.328 13:05:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.328 13:05:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.328 13:05:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.328 13:05:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.328 13:05:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:14.860 13:05:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:14.860 [global] 00:13:14.860 thread=1 00:13:14.860 invalidate=1 00:13:14.860 rw=write 00:13:14.860 time_based=1 00:13:14.860 runtime=1 00:13:14.860 ioengine=libaio 00:13:14.860 direct=1 00:13:14.860 bs=4096 00:13:14.860 iodepth=1 00:13:14.860 norandommap=0 00:13:14.860 numjobs=1 00:13:14.860 00:13:14.860 verify_dump=1 00:13:14.860 verify_backlog=512 00:13:14.860 verify_state_save=0 00:13:14.860 do_verify=1 00:13:14.860 verify=crc32c-intel 00:13:14.860 [job0] 00:13:14.860 filename=/dev/nvme0n1 00:13:14.860 Could not set queue depth (nvme0n1) 00:13:14.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.860 fio-3.35 00:13:14.860 Starting 1 thread 00:13:15.799 00:13:15.799 job0: (groupid=0, jobs=1): err= 0: pid=3812142: Mon Jul 15 13:05:37 2024 00:13:15.799 read: IOPS=1586, BW=6346KiB/s (6498kB/s)(6352KiB/1001msec) 00:13:15.799 slat (nsec): min=4674, max=63255, avg=18384.16, stdev=10345.89 
00:13:15.799 clat (usec): min=252, max=546, avg=333.01, stdev=50.00 00:13:15.799 lat (usec): min=258, max=560, avg=351.40, stdev=54.50 00:13:15.799 clat percentiles (usec): 00:13:15.799 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:13:15.799 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 347], 00:13:15.799 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 392], 95.00th=[ 408], 00:13:15.799 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 519], 99.95th=[ 545], 00:13:15.799 | 99.99th=[ 545] 00:13:15.799 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:15.799 slat (nsec): min=6252, max=62290, avg=13409.02, stdev=6966.05 00:13:15.799 clat (usec): min=163, max=389, avg=194.58, stdev=17.61 00:13:15.799 lat (usec): min=170, max=429, avg=207.99, stdev=22.26 00:13:15.799 clat percentiles (usec): 00:13:15.799 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:13:15.799 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:13:15.799 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 208], 95.00th=[ 217], 00:13:15.799 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 351], 00:13:15.799 | 99.99th=[ 392] 00:13:15.799 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:15.799 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:15.799 lat (usec) : 250=55.23%, 500=44.66%, 750=0.11% 00:13:15.799 cpu : usr=3.50%, sys=5.50%, ctx=3636, majf=0, minf=2 00:13:15.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.799 issued rwts: total=1588,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.799 00:13:15.799 Run status group 0 (all jobs): 00:13:15.799 READ: bw=6346KiB/s (6498kB/s), 6346KiB/s-6346KiB/s (6498kB/s-6498kB/s), io=6352KiB (6504kB), run=1001-1001msec 00:13:15.799 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:13:15.799 00:13:15.799 Disk stats (read/write): 00:13:15.799 nvme0n1: ios=1586/1650, merge=0/0, ticks=529/312, in_queue=841, util=92.18% 00:13:15.800 13:05:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.060 rmmod nvme_tcp 00:13:16.060 rmmod nvme_fabrics 00:13:16.060 rmmod nvme_keyring 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3811478 ']' 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3811478 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3811478 ']' 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3811478 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811478 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811478' 00:13:16.060 killing process with pid 3811478 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3811478 00:13:16.060 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3811478 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.319 13:05:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.859 13:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.859 00:13:18.859 real 0m10.398s 00:13:18.859 user 0m24.958s 00:13:18.859 sys 0m2.347s 00:13:18.859 13:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.859 13:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.859 ************************************ 00:13:18.859 END TEST nvmf_nmic 00:13:18.859 ************************************ 00:13:18.859 13:05:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.859 13:05:39 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:18.859 13:05:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.859 
13:05:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.859 13:05:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.859 ************************************ 00:13:18.859 START TEST nvmf_fio_target 00:13:18.859 ************************************ 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:18.859 * Looking for test storage... 00:13:18.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.859 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.860 13:05:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.764 13:05:42 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.764 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.765 13:05:42 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:20.765 00:13:20.765 --- 10.0.0.2 ping statistics --- 00:13:20.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.765 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:13:20.765 00:13:20.765 --- 10.0.0.1 ping statistics --- 00:13:20.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.765 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3814216 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3814216 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3814216 ']' 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
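Up to this point nvmf_tcp_init has split the two ice ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and hosts the target, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and both directions are verified by the pings above. Condensed into a standalone sketch (the cvl_0_* device names, workspace path, and the -m 0xF core mask are specific to this host and run):

# Condensed from the nvmf_tcp_init trace above; device names are host-specific.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Launching nvmf_tgt under ip netns exec is what lets the initiator-side nvme connect and fio commands later in the trace exercise a real TCP path across the back-to-back ports rather than loopback.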
00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.765 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.765 [2024-07-15 13:05:42.269764] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:13:20.765 [2024-07-15 13:05:42.269843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.765 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.765 [2024-07-15 13:05:42.334933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.765 [2024-07-15 13:05:42.445902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.765 [2024-07-15 13:05:42.445960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.765 [2024-07-15 13:05:42.445973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.765 [2024-07-15 13:05:42.445985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.765 [2024-07-15 13:05:42.445995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.765 [2024-07-15 13:05:42.446085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.765 [2024-07-15 13:05:42.446159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.765 [2024-07-15 13:05:42.446198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.765 [2024-07-15 13:05:42.446201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.024 13:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:21.283 [2024-07-15 13:05:42.825535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.283 13:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:21.541 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:21.541 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:21.799 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:21.799 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.056 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
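fio.sh now assembles the namespaces it will exercise: two plain malloc bdevs, a RAID0 over Malloc2/Malloc3, and a concat over Malloc4-Malloc6, all attached to cnode1 behind the 10.0.0.2:4420 listener (the per-step rpc.py invocations continue in the trace below). A condensed sketch of that sequence; the loop form and ordering are mine rather than the script's:

# Condensed sketch of the fio.sh provisioning; 64 MiB x 512 B malloc bdevs,
# -z 64 gives raid0/concat0 a 64 KiB strip size, all values as in the trace.
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done   # returns Malloc0..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
  -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

After the connect, waitforserial expects 4 devices, and the host sees the four namespaces as /dev/nvme0n1 through /dev/nvme0n4, exactly the filenames the fio-wrapper jobs below are pointed at.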
00:13:22.056 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.313 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:22.313 13:05:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:22.570 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.827 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:22.827 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.085 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:23.085 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.343 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:23.343 13:05:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:23.600 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.857 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:23.857 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.114 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.114 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.371 13:05:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.628 [2024-07-15 13:05:46.212426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.628 13:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:24.884 13:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:25.141 13:05:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:25.708 13:05:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:27.665 13:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:27.665 [global] 00:13:27.665 thread=1 00:13:27.665 invalidate=1 00:13:27.665 rw=write 00:13:27.665 time_based=1 00:13:27.665 runtime=1 00:13:27.665 ioengine=libaio 00:13:27.665 direct=1 00:13:27.665 bs=4096 00:13:27.665 iodepth=1 00:13:27.665 norandommap=0 00:13:27.665 numjobs=1 00:13:27.665 00:13:27.665 verify_dump=1 00:13:27.665 verify_backlog=512 00:13:27.665 verify_state_save=0 00:13:27.665 do_verify=1 00:13:27.665 verify=crc32c-intel 00:13:27.665 [job0] 00:13:27.665 filename=/dev/nvme0n1 00:13:27.666 [job1] 00:13:27.666 filename=/dev/nvme0n2 00:13:27.666 [job2] 00:13:27.666 filename=/dev/nvme0n3 00:13:27.666 [job3] 00:13:27.666 filename=/dev/nvme0n4 00:13:27.924 Could not set queue depth (nvme0n1) 00:13:27.924 Could not set queue depth (nvme0n2) 00:13:27.924 Could not set queue depth (nvme0n3) 00:13:27.924 Could not set queue depth (nvme0n4) 00:13:27.924 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.924 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.924 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.924 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.924 fio-3.35 00:13:27.924 Starting 4 threads 00:13:29.301 00:13:29.301 job0: (groupid=0, jobs=1): err= 0: pid=3815169: Mon Jul 15 13:05:50 2024 00:13:29.301 read: IOPS=70, BW=281KiB/s (287kB/s)(288KiB/1026msec) 00:13:29.301 slat (nsec): min=14659, max=60542, avg=24776.71, stdev=8424.79 00:13:29.301 clat (usec): min=345, max=44001, avg=12518.52, stdev=18894.79 00:13:29.301 lat (usec): min=377, max=44018, avg=12543.30, stdev=18895.02 00:13:29.301 clat percentiles (usec): 00:13:29.301 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 433], 20.00th=[ 457], 00:13:29.301 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:13:29.301 | 70.00th=[ 709], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:29.301 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:29.301 | 99.99th=[43779] 00:13:29.301 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:13:29.301 slat (nsec): min=8312, max=36898, avg=16369.46, stdev=6104.34 00:13:29.301 
clat (usec): min=178, max=388, avg=216.56, stdev=23.99 00:13:29.301 lat (usec): min=188, max=398, avg=232.93, stdev=27.27 00:13:29.301 clat percentiles (usec): 00:13:29.301 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:13:29.301 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:13:29.301 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 251], 00:13:29.301 | 99.00th=[ 306], 99.50th=[ 367], 99.90th=[ 388], 99.95th=[ 388], 00:13:29.301 | 99.99th=[ 388] 00:13:29.301 bw ( KiB/s): min= 4096, max= 4096, per=31.29%, avg=4096.00, stdev= 0.00, samples=1 00:13:29.301 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:29.301 lat (usec) : 250=83.05%, 500=10.62%, 750=2.74% 00:13:29.301 lat (msec) : 50=3.60% 00:13:29.301 cpu : usr=1.17%, sys=0.78%, ctx=586, majf=0, minf=1 00:13:29.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.301 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.301 job1: (groupid=0, jobs=1): err= 0: pid=3815170: Mon Jul 15 13:05:50 2024 00:13:29.301 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:29.301 slat (nsec): min=5951, max=48422, avg=13554.47, stdev=5954.03 00:13:29.301 clat (usec): min=274, max=41433, avg=1434.73, stdev=6409.96 00:13:29.301 lat (usec): min=280, max=41447, avg=1448.29, stdev=6411.88 00:13:29.301 clat percentiles (usec): 00:13:29.301 | 1.00th=[ 285], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 351], 00:13:29.301 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:13:29.301 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 469], 00:13:29.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:29.301 | 99.99th=[41681] 00:13:29.301 write: IOPS=797, BW=3189KiB/s (3265kB/s)(3192KiB/1001msec); 0 zone resets 00:13:29.301 slat (usec): min=7, max=20941, avg=45.82, stdev=740.71 00:13:29.301 clat (usec): min=171, max=631, avg=269.56, stdev=86.30 00:13:29.301 lat (usec): min=179, max=21379, avg=315.38, stdev=752.37 00:13:29.301 clat percentiles (usec): 00:13:29.301 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:13:29.301 | 30.00th=[ 206], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:13:29.301 | 70.00th=[ 277], 80.00th=[ 367], 90.00th=[ 416], 95.00th=[ 437], 00:13:29.301 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 635], 99.95th=[ 635], 00:13:29.301 | 99.99th=[ 635] 00:13:29.301 bw ( KiB/s): min= 4096, max= 4096, per=31.29%, avg=4096.00, stdev= 0.00, samples=1 00:13:29.301 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:29.301 lat (usec) : 250=39.08%, 500=59.01%, 750=0.61% 00:13:29.301 lat (msec) : 2=0.08%, 4=0.15%, 10=0.08%, 50=0.99% 00:13:29.301 cpu : usr=1.60%, sys=2.90%, ctx=1313, majf=0, minf=2 00:13:29.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.301 issued rwts: total=512,798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.301 job2: (groupid=0, jobs=1): err= 0: pid=3815171: Mon Jul 15 13:05:50 2024 00:13:29.301 read: 
IOPS=160, BW=642KiB/s (657kB/s)(644KiB/1003msec) 00:13:29.302 slat (nsec): min=5035, max=38687, avg=14137.94, stdev=6860.77 00:13:29.302 clat (usec): min=295, max=41095, avg=4960.81, stdev=12757.42 00:13:29.302 lat (usec): min=308, max=41115, avg=4974.95, stdev=12761.29 00:13:29.302 clat percentiles (usec): 00:13:29.302 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 379], 00:13:29.302 | 30.00th=[ 388], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 424], 00:13:29.302 | 70.00th=[ 457], 80.00th=[ 494], 90.00th=[40633], 95.00th=[41157], 00:13:29.302 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:29.302 | 99.99th=[41157] 00:13:29.302 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:29.302 slat (usec): min=10, max=40670, avg=142.73, stdev=2025.84 00:13:29.302 clat (usec): min=207, max=446, avg=243.40, stdev=20.14 00:13:29.302 lat (usec): min=219, max=41116, avg=386.13, stdev=2035.22 00:13:29.302 clat percentiles (usec): 00:13:29.302 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:13:29.302 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 245], 00:13:29.302 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 273], 00:13:29.302 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 449], 99.95th=[ 449], 00:13:29.302 | 99.99th=[ 449] 00:13:29.302 bw ( KiB/s): min= 4096, max= 4096, per=31.29%, avg=4096.00, stdev= 0.00, samples=1 00:13:29.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:29.302 lat (usec) : 250=55.72%, 500=40.27%, 750=1.19% 00:13:29.302 lat (msec) : 10=0.15%, 50=2.67% 00:13:29.302 cpu : usr=1.00%, sys=1.50%, ctx=676, majf=0, minf=1 00:13:29.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.302 issued rwts: total=161,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.302 job3: (groupid=0, jobs=1): err= 0: pid=3815172: Mon Jul 15 13:05:50 2024 00:13:29.302 read: IOPS=1473, BW=5894KiB/s (6036kB/s)(5900KiB/1001msec) 00:13:29.302 slat (nsec): min=5805, max=41297, avg=13182.53, stdev=5413.64 00:13:29.302 clat (usec): min=283, max=41186, avg=402.77, stdev=1063.89 00:13:29.302 lat (usec): min=293, max=41204, avg=415.95, stdev=1064.11 00:13:29.302 clat percentiles (usec): 00:13:29.302 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 343], 00:13:29.302 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 383], 00:13:29.302 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 457], 00:13:29.302 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 1450], 99.95th=[41157], 00:13:29.302 | 99.99th=[41157] 00:13:29.302 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:29.302 slat (nsec): min=7897, max=60111, avg=15843.39, stdev=7003.20 00:13:29.302 clat (usec): min=179, max=469, avg=227.38, stdev=32.42 00:13:29.302 lat (usec): min=189, max=518, avg=243.22, stdev=37.64 00:13:29.302 clat percentiles (usec): 00:13:29.302 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:13:29.302 | 30.00th=[ 204], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 231], 00:13:29.302 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 281], 00:13:29.302 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 424], 99.95th=[ 469], 00:13:29.302 | 99.99th=[ 469] 00:13:29.302 bw ( KiB/s): 
min= 7768, max= 7768, per=59.34%, avg=7768.00, stdev= 0.00, samples=1 00:13:29.302 iops : min= 1942, max= 1942, avg=1942.00, stdev= 0.00, samples=1 00:13:29.302 lat (usec) : 250=41.35%, 500=57.46%, 750=1.13% 00:13:29.302 lat (msec) : 2=0.03%, 50=0.03% 00:13:29.302 cpu : usr=3.90%, sys=5.50%, ctx=3011, majf=0, minf=1 00:13:29.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.302 issued rwts: total=1475,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.302 00:13:29.302 Run status group 0 (all jobs): 00:13:29.302 READ: bw=8655KiB/s (8863kB/s), 281KiB/s-5894KiB/s (287kB/s-6036kB/s), io=8880KiB (9093kB), run=1001-1026msec 00:13:29.302 WRITE: bw=12.8MiB/s (13.4MB/s), 1996KiB/s-6138KiB/s (2044kB/s-6285kB/s), io=13.1MiB (13.8MB), run=1001-1026msec 00:13:29.302 00:13:29.302 Disk stats (read/write): 00:13:29.302 nvme0n1: ios=116/512, merge=0/0, ticks=931/119, in_queue=1050, util=85.67% 00:13:29.302 nvme0n2: ios=435/512, merge=0/0, ticks=1508/140, in_queue=1648, util=89.73% 00:13:29.302 nvme0n3: ios=140/512, merge=0/0, ticks=1120/124, in_queue=1244, util=95.61% 00:13:29.302 nvme0n4: ios=1123/1536, merge=0/0, ticks=522/325, in_queue=847, util=95.89% 00:13:29.302 13:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:29.302 [global] 00:13:29.302 thread=1 00:13:29.302 invalidate=1 00:13:29.302 rw=randwrite 00:13:29.302 time_based=1 00:13:29.302 runtime=1 00:13:29.302 ioengine=libaio 00:13:29.302 direct=1 00:13:29.302 bs=4096 00:13:29.302 iodepth=1 00:13:29.302 norandommap=0 00:13:29.302 numjobs=1 00:13:29.302 00:13:29.302 verify_dump=1 00:13:29.302 verify_backlog=512 00:13:29.302 verify_state_save=0 00:13:29.302 do_verify=1 00:13:29.302 verify=crc32c-intel 00:13:29.302 [job0] 00:13:29.302 filename=/dev/nvme0n1 00:13:29.302 [job1] 00:13:29.302 filename=/dev/nvme0n2 00:13:29.302 [job2] 00:13:29.302 filename=/dev/nvme0n3 00:13:29.302 [job3] 00:13:29.302 filename=/dev/nvme0n4 00:13:29.302 Could not set queue depth (nvme0n1) 00:13:29.302 Could not set queue depth (nvme0n2) 00:13:29.302 Could not set queue depth (nvme0n3) 00:13:29.302 Could not set queue depth (nvme0n4) 00:13:29.302 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.302 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.302 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.302 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.302 fio-3.35 00:13:29.302 Starting 4 threads 00:13:30.676 00:13:30.676 job0: (groupid=0, jobs=1): err= 0: pid=3815515: Mon Jul 15 13:05:52 2024 00:13:30.676 read: IOPS=1633, BW=6533KiB/s (6690kB/s)(6540KiB/1001msec) 00:13:30.676 slat (nsec): min=4220, max=61916, avg=14734.45, stdev=8409.45 00:13:30.676 clat (usec): min=251, max=8053, avg=328.07, stdev=196.68 00:13:30.676 lat (usec): min=258, max=8069, avg=342.81, stdev=196.67 00:13:30.676 clat percentiles (usec): 00:13:30.676 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 00:13:30.676 | 30.00th=[ 
293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:13:30.676 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 383], 95.00th=[ 416], 00:13:30.676 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 8029], 00:13:30.676 | 99.99th=[ 8029] 00:13:30.676 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:30.676 slat (nsec): min=5230, max=40026, avg=10331.54, stdev=5007.54 00:13:30.676 clat (usec): min=161, max=631, avg=197.66, stdev=29.42 00:13:30.676 lat (usec): min=167, max=637, avg=207.99, stdev=30.95 00:13:30.676 clat percentiles (usec): 00:13:30.676 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:13:30.676 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:13:30.676 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 241], 00:13:30.676 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 553], 99.95th=[ 603], 00:13:30.676 | 99.99th=[ 635] 00:13:30.676 bw ( KiB/s): min= 8192, max= 8192, per=45.82%, avg=8192.00, stdev= 0.00, samples=1 00:13:30.676 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:30.676 lat (usec) : 250=53.62%, 500=45.94%, 750=0.41% 00:13:30.676 lat (msec) : 10=0.03% 00:13:30.676 cpu : usr=2.50%, sys=4.80%, ctx=3683, majf=0, minf=1 00:13:30.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.676 issued rwts: total=1635,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.676 job1: (groupid=0, jobs=1): err= 0: pid=3815521: Mon Jul 15 13:05:52 2024 00:13:30.676 read: IOPS=203, BW=814KiB/s (833kB/s)(816KiB/1003msec) 00:13:30.676 slat (nsec): min=5714, max=44012, avg=14860.30, stdev=7046.40 00:13:30.676 clat (usec): min=296, max=42307, avg=4289.98, stdev=11946.24 00:13:30.676 lat (usec): min=302, max=42322, avg=4304.84, stdev=11945.67 00:13:30.676 clat percentiles (usec): 00:13:30.676 | 1.00th=[ 297], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 355], 00:13:30.676 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:13:30.676 | 70.00th=[ 412], 80.00th=[ 445], 90.00th=[ 644], 95.00th=[41157], 00:13:30.676 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:30.677 | 99.99th=[42206] 00:13:30.677 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:30.677 slat (nsec): min=6963, max=43890, avg=15554.64, stdev=6056.56 00:13:30.677 clat (usec): min=180, max=374, avg=220.85, stdev=21.59 00:13:30.677 lat (usec): min=194, max=383, avg=236.41, stdev=20.71 00:13:30.677 clat percentiles (usec): 00:13:30.677 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:13:30.677 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:13:30.677 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:13:30.677 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 375], 99.95th=[ 375], 00:13:30.677 | 99.99th=[ 375] 00:13:30.677 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:13:30.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:30.677 lat (usec) : 250=65.92%, 500=30.87%, 750=0.42% 00:13:30.677 lat (msec) : 50=2.79% 00:13:30.677 cpu : usr=0.50%, sys=1.10%, ctx=717, majf=0, minf=2 00:13:30.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.677 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 issued rwts: total=204,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.677 job2: (groupid=0, jobs=1): err= 0: pid=3815523: Mon Jul 15 13:05:52 2024 00:13:30.677 read: IOPS=1203, BW=4815KiB/s (4930kB/s)(4964KiB/1031msec) 00:13:30.677 slat (nsec): min=4799, max=60823, avg=12428.24, stdev=5352.27 00:13:30.677 clat (usec): min=262, max=41340, avg=459.38, stdev=2294.14 00:13:30.677 lat (usec): min=269, max=41353, avg=471.81, stdev=2294.20 00:13:30.677 clat percentiles (usec): 00:13:30.677 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:13:30.677 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:13:30.677 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 379], 00:13:30.677 | 99.00th=[ 433], 99.50th=[ 529], 99.90th=[40633], 99.95th=[41157], 00:13:30.677 | 99.99th=[41157] 00:13:30.677 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:13:30.677 slat (nsec): min=6155, max=80221, avg=18076.56, stdev=10243.08 00:13:30.677 clat (usec): min=178, max=3531, avg=263.16, stdev=135.64 00:13:30.677 lat (usec): min=186, max=3541, avg=281.23, stdev=138.45 00:13:30.677 clat percentiles (usec): 00:13:30.677 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:13:30.677 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:13:30.677 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 388], 95.00th=[ 424], 00:13:30.677 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 2999], 99.95th=[ 3523], 00:13:30.677 | 99.99th=[ 3523] 00:13:30.677 bw ( KiB/s): min= 4096, max= 8192, per=34.37%, avg=6144.00, stdev=2896.31, samples=2 00:13:30.677 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:13:30.677 lat (usec) : 250=38.96%, 500=60.53%, 750=0.22%, 1000=0.04% 00:13:30.677 lat (msec) : 2=0.04%, 4=0.07%, 50=0.14% 00:13:30.677 cpu : usr=3.30%, sys=5.34%, ctx=2779, majf=0, minf=1 00:13:30.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 issued rwts: total=1241,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.677 job3: (groupid=0, jobs=1): err= 0: pid=3815524: Mon Jul 15 13:05:52 2024 00:13:30.677 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:13:30.677 slat (nsec): min=13277, max=31494, avg=14681.18, stdev=3815.16 00:13:30.677 clat (usec): min=40696, max=42025, avg=41010.65, stdev=237.97 00:13:30.677 lat (usec): min=40712, max=42039, avg=41025.34, stdev=237.63 00:13:30.677 clat percentiles (usec): 00:13:30.677 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:30.677 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:30.677 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:30.677 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:30.677 | 99.99th=[42206] 00:13:30.677 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:13:30.677 slat (nsec): min=6178, max=46066, avg=14645.82, stdev=6381.89 00:13:30.677 clat (usec): min=196, max=369, avg=222.27, stdev=13.77 00:13:30.677 lat (usec): 
min=204, max=378, avg=236.92, stdev=14.65 00:13:30.677 clat percentiles (usec): 00:13:30.677 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 212], 00:13:30.677 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:13:30.677 | 70.00th=[ 227], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 241], 00:13:30.677 | 99.00th=[ 260], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 371], 00:13:30.677 | 99.99th=[ 371] 00:13:30.677 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:13:30.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:30.677 lat (usec) : 250=93.63%, 500=2.25% 00:13:30.677 lat (msec) : 50=4.12% 00:13:30.677 cpu : usr=0.39%, sys=0.68%, ctx=535, majf=0, minf=1 00:13:30.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.677 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.677 00:13:30.677 Run status group 0 (all jobs): 00:13:30.677 READ: bw=11.8MiB/s (12.3MB/s), 85.7KiB/s-6533KiB/s (87.7kB/s-6690kB/s), io=12.1MiB (12.7MB), run=1001-1031msec 00:13:30.677 WRITE: bw=17.5MiB/s (18.3MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1031msec 00:13:30.677 00:13:30.677 Disk stats (read/write): 00:13:30.677 nvme0n1: ios=1530/1536, merge=0/0, ticks=529/304, in_queue=833, util=87.07% 00:13:30.677 nvme0n2: ios=225/512, merge=0/0, ticks=1572/113, in_queue=1685, util=89.43% 00:13:30.677 nvme0n3: ios=1247/1536, merge=0/0, ticks=1198/366, in_queue=1564, util=93.52% 00:13:30.677 nvme0n4: ios=74/512, merge=0/0, ticks=787/103, in_queue=890, util=96.11% 00:13:30.677 13:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:30.677 [global] 00:13:30.677 thread=1 00:13:30.677 invalidate=1 00:13:30.677 rw=write 00:13:30.677 time_based=1 00:13:30.677 runtime=1 00:13:30.677 ioengine=libaio 00:13:30.677 direct=1 00:13:30.677 bs=4096 00:13:30.677 iodepth=128 00:13:30.677 norandommap=0 00:13:30.677 numjobs=1 00:13:30.677 00:13:30.677 verify_dump=1 00:13:30.677 verify_backlog=512 00:13:30.677 verify_state_save=0 00:13:30.677 do_verify=1 00:13:30.677 verify=crc32c-intel 00:13:30.677 [job0] 00:13:30.677 filename=/dev/nvme0n1 00:13:30.677 [job1] 00:13:30.677 filename=/dev/nvme0n2 00:13:30.677 [job2] 00:13:30.677 filename=/dev/nvme0n3 00:13:30.677 [job3] 00:13:30.677 filename=/dev/nvme0n4 00:13:30.677 Could not set queue depth (nvme0n1) 00:13:30.677 Could not set queue depth (nvme0n2) 00:13:30.677 Could not set queue depth (nvme0n3) 00:13:30.677 Could not set queue depth (nvme0n4) 00:13:30.934 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.934 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.934 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.934 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.934 fio-3.35 00:13:30.934 Starting 4 threads 00:13:32.312 00:13:32.312 job0: (groupid=0, jobs=1): err= 0: pid=3815750: Mon Jul 15 13:05:53 2024 00:13:32.312 
read: IOPS=3554, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:13:32.312 slat (usec): min=2, max=13927, avg=117.72, stdev=880.41 00:13:32.312 clat (usec): min=1034, max=48434, avg=15459.85, stdev=7416.09 00:13:32.312 lat (usec): min=1042, max=48449, avg=15577.57, stdev=7487.37 00:13:32.312 clat percentiles (usec): 00:13:32.312 | 1.00th=[ 1680], 5.00th=[ 3228], 10.00th=[ 9372], 20.00th=[11207], 00:13:32.312 | 30.00th=[11731], 40.00th=[12518], 50.00th=[14222], 60.00th=[15533], 00:13:32.312 | 70.00th=[17957], 80.00th=[20055], 90.00th=[23200], 95.00th=[25297], 00:13:32.312 | 99.00th=[43779], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:13:32.312 | 99.99th=[48497] 00:13:32.312 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:13:32.312 slat (usec): min=3, max=38298, avg=145.11, stdev=1018.03 00:13:32.312 clat (usec): min=913, max=57133, avg=18818.43, stdev=12461.83 00:13:32.312 lat (usec): min=930, max=57151, avg=18963.55, stdev=12566.43 00:13:32.312 clat percentiles (usec): 00:13:32.312 | 1.00th=[ 1074], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 9896], 00:13:32.312 | 30.00th=[10421], 40.00th=[10814], 50.00th=[13304], 60.00th=[16057], 00:13:32.312 | 70.00th=[20579], 80.00th=[33817], 90.00th=[38011], 95.00th=[42206], 00:13:32.312 | 99.00th=[51643], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:13:32.312 | 99.99th=[56886] 00:13:32.312 bw ( KiB/s): min=12776, max=15927, per=28.19%, avg=14351.50, stdev=2228.09, samples=2 00:13:32.312 iops : min= 3194, max= 3981, avg=3587.50, stdev=556.49, samples=2 00:13:32.312 lat (usec) : 1000=0.38% 00:13:32.312 lat (msec) : 2=1.08%, 4=3.20%, 10=11.79%, 20=58.00%, 50=24.96% 00:13:32.312 lat (msec) : 100=0.60% 00:13:32.312 cpu : usr=3.98%, sys=6.57%, ctx=288, majf=0, minf=9 00:13:32.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:32.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.312 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.312 job1: (groupid=0, jobs=1): err= 0: pid=3815751: Mon Jul 15 13:05:53 2024 00:13:32.312 read: IOPS=1777, BW=7109KiB/s (7280kB/s)(7152KiB/1006msec) 00:13:32.312 slat (usec): min=3, max=33660, avg=261.57, stdev=1743.40 00:13:32.312 clat (msec): min=4, max=125, avg=28.11, stdev=18.92 00:13:32.312 lat (msec): min=6, max=125, avg=28.37, stdev=19.09 00:13:32.312 clat percentiles (msec): 00:13:32.312 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:13:32.312 | 30.00th=[ 16], 40.00th=[ 22], 50.00th=[ 25], 60.00th=[ 29], 00:13:32.312 | 70.00th=[ 31], 80.00th=[ 39], 90.00th=[ 47], 95.00th=[ 65], 00:13:32.312 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 126], 99.95th=[ 126], 00:13:32.312 | 99.99th=[ 126] 00:13:32.312 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:13:32.312 slat (usec): min=4, max=13735, avg=250.03, stdev=1086.10 00:13:32.312 clat (msec): min=3, max=125, avg=37.74, stdev=28.25 00:13:32.312 lat (msec): min=3, max=125, avg=37.99, stdev=28.44 00:13:32.312 clat percentiles (msec): 00:13:32.312 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 14], 00:13:32.312 | 30.00th=[ 19], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 37], 00:13:32.312 | 70.00th=[ 39], 80.00th=[ 46], 90.00th=[ 81], 95.00th=[ 112], 00:13:32.312 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 126], 00:13:32.312 | 
99.99th=[ 126] 00:13:32.312 bw ( KiB/s): min= 6160, max=10244, per=16.11%, avg=8202.00, stdev=2887.82, samples=2 00:13:32.312 iops : min= 1540, max= 2561, avg=2050.50, stdev=721.96, samples=2 00:13:32.312 lat (msec) : 4=0.31%, 10=3.42%, 20=30.97%, 50=52.35%, 100=7.77% 00:13:32.312 lat (msec) : 250=5.19% 00:13:32.312 cpu : usr=3.68%, sys=3.08%, ctx=251, majf=0, minf=13 00:13:32.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:32.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.312 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.312 job2: (groupid=0, jobs=1): err= 0: pid=3815752: Mon Jul 15 13:05:53 2024 00:13:32.312 read: IOPS=5478, BW=21.4MiB/s (22.4MB/s)(22.5MiB/1051msec) 00:13:32.312 slat (usec): min=2, max=10364, avg=85.43, stdev=597.88 00:13:32.312 clat (usec): min=4348, max=61115, avg=12493.57, stdev=6873.07 00:13:32.312 lat (usec): min=4356, max=61122, avg=12579.00, stdev=6888.91 00:13:32.312 clat percentiles (usec): 00:13:32.312 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:13:32.312 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11600], 00:13:32.312 | 70.00th=[12256], 80.00th=[14222], 90.00th=[15008], 95.00th=[17171], 00:13:32.312 | 99.00th=[55837], 99.50th=[58983], 99.90th=[60556], 99.95th=[61080], 00:13:32.312 | 99.99th=[61080] 00:13:32.312 write: IOPS=5845, BW=22.8MiB/s (23.9MB/s)(24.0MiB/1051msec); 0 zone resets 00:13:32.312 slat (usec): min=4, max=9720, avg=74.87, stdev=508.83 00:13:32.312 clat (usec): min=1486, max=61126, avg=9973.99, stdev=2897.95 00:13:32.312 lat (usec): min=1499, max=61133, avg=10048.86, stdev=2912.27 00:13:32.312 clat percentiles (usec): 00:13:32.312 | 1.00th=[ 4146], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6915], 00:13:32.312 | 30.00th=[ 7439], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:13:32.312 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13698], 95.00th=[14222], 00:13:32.312 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20055], 99.95th=[20055], 00:13:32.312 | 99.99th=[61080] 00:13:32.312 bw ( KiB/s): min=24560, max=24576, per=48.25%, avg=24568.00, stdev=11.31, samples=2 00:13:32.312 iops : min= 6140, max= 6144, avg=6142.00, stdev= 2.83, samples=2 00:13:32.312 lat (msec) : 2=0.02%, 4=0.33%, 10=33.83%, 20=64.58%, 50=0.19% 00:13:32.312 lat (msec) : 100=1.06% 00:13:32.312 cpu : usr=7.43%, sys=10.38%, ctx=450, majf=0, minf=15 00:13:32.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:32.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.312 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.312 job3: (groupid=0, jobs=1): err= 0: pid=3815753: Mon Jul 15 13:05:53 2024 00:13:32.312 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:13:32.312 slat (usec): min=3, max=27130, avg=302.42, stdev=1971.16 00:13:32.312 clat (usec): min=19316, max=88348, avg=40068.96, stdev=19919.82 00:13:32.312 lat (usec): min=19335, max=88391, avg=40371.38, stdev=20128.80 00:13:32.312 clat percentiles (usec): 00:13:32.312 | 1.00th=[20317], 5.00th=[21365], 10.00th=[22414], 20.00th=[22938], 00:13:32.312 | 30.00th=[23462], 
40.00th=[25035], 50.00th=[25560], 60.00th=[37487], 00:13:32.312 | 70.00th=[57410], 80.00th=[62653], 90.00th=[68682], 95.00th=[69731], 00:13:32.312 | 99.00th=[78119], 99.50th=[80217], 99.90th=[88605], 99.95th=[88605], 00:13:32.312 | 99.99th=[88605] 00:13:32.312 write: IOPS=1590, BW=6363KiB/s (6516kB/s)(6408KiB/1007msec); 0 zone resets 00:13:32.312 slat (usec): min=5, max=38177, avg=324.76, stdev=1942.79 00:13:32.312 clat (usec): min=4787, max=86351, avg=40355.21, stdev=18515.75 00:13:32.313 lat (usec): min=9632, max=86379, avg=40679.97, stdev=18635.36 00:13:32.313 clat percentiles (usec): 00:13:32.313 | 1.00th=[17695], 5.00th=[17957], 10.00th=[17957], 20.00th=[20055], 00:13:32.313 | 30.00th=[23987], 40.00th=[29754], 50.00th=[38011], 60.00th=[46400], 00:13:32.313 | 70.00th=[56361], 80.00th=[60031], 90.00th=[66323], 95.00th=[69731], 00:13:32.313 | 99.00th=[77071], 99.50th=[81265], 99.90th=[81265], 99.95th=[86508], 00:13:32.313 | 99.99th=[86508] 00:13:32.313 bw ( KiB/s): min= 4784, max= 7504, per=12.07%, avg=6144.00, stdev=1923.33, samples=2 00:13:32.313 iops : min= 1196, max= 1876, avg=1536.00, stdev=480.83, samples=2 00:13:32.313 lat (msec) : 10=0.29%, 20=10.55%, 50=52.39%, 100=36.78% 00:13:32.313 cpu : usr=2.29%, sys=3.28%, ctx=147, majf=0, minf=13 00:13:32.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:32.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.313 issued rwts: total=1536,1602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.313 00:13:32.313 Run status group 0 (all jobs): 00:13:32.313 READ: bw=47.0MiB/s (49.3MB/s), 6101KiB/s-21.4MiB/s (6248kB/s-22.4MB/s), io=49.4MiB (51.8MB), run=1006-1051msec 00:13:32.313 WRITE: bw=49.7MiB/s (52.1MB/s), 6363KiB/s-22.8MiB/s (6516kB/s-23.9MB/s), io=52.3MiB (54.8MB), run=1006-1051msec 00:13:32.313 00:13:32.313 Disk stats (read/write): 00:13:32.313 nvme0n1: ios=3124/3255, merge=0/0, ticks=42120/48970, in_queue=91090, util=98.20% 00:13:32.313 nvme0n2: ios=1586/1687, merge=0/0, ticks=40691/62433, in_queue=103124, util=98.27% 00:13:32.313 nvme0n3: ios=4883/5120, merge=0/0, ticks=54419/48194, in_queue=102613, util=98.43% 00:13:32.313 nvme0n4: ios=1081/1516, merge=0/0, ticks=15692/19533, in_queue=35225, util=98.21% 00:13:32.313 13:05:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:32.313 [global] 00:13:32.313 thread=1 00:13:32.313 invalidate=1 00:13:32.313 rw=randwrite 00:13:32.313 time_based=1 00:13:32.313 runtime=1 00:13:32.313 ioengine=libaio 00:13:32.313 direct=1 00:13:32.313 bs=4096 00:13:32.313 iodepth=128 00:13:32.313 norandommap=0 00:13:32.313 numjobs=1 00:13:32.313 00:13:32.313 verify_dump=1 00:13:32.313 verify_backlog=512 00:13:32.313 verify_state_save=0 00:13:32.313 do_verify=1 00:13:32.313 verify=crc32c-intel 00:13:32.313 [job0] 00:13:32.313 filename=/dev/nvme0n1 00:13:32.313 [job1] 00:13:32.313 filename=/dev/nvme0n2 00:13:32.313 [job2] 00:13:32.313 filename=/dev/nvme0n3 00:13:32.313 [job3] 00:13:32.313 filename=/dev/nvme0n4 00:13:32.313 Could not set queue depth (nvme0n1) 00:13:32.313 Could not set queue depth (nvme0n2) 00:13:32.313 Could not set queue depth (nvme0n3) 00:13:32.313 Could not set queue depth (nvme0n4) 00:13:32.313 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.313 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.313 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.313 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.313 fio-3.35 00:13:32.313 Starting 4 threads 00:13:33.691 00:13:33.691 job0: (groupid=0, jobs=1): err= 0: pid=3815977: Mon Jul 15 13:05:55 2024 00:13:33.691 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:13:33.691 slat (usec): min=3, max=28557, avg=188.05, stdev=1446.55 00:13:33.691 clat (usec): min=9213, max=79448, avg=25237.60, stdev=16094.46 00:13:33.691 lat (usec): min=9221, max=93957, avg=25425.65, stdev=16253.74 00:13:33.691 clat percentiles (usec): 00:13:33.691 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12518], 20.00th=[13960], 00:13:33.691 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15401], 60.00th=[19792], 00:13:33.691 | 70.00th=[32375], 80.00th=[39060], 90.00th=[52691], 95.00th=[55837], 00:13:33.691 | 99.00th=[71828], 99.50th=[72877], 99.90th=[79168], 99.95th=[79168], 00:13:33.691 | 99.99th=[79168] 00:13:33.691 write: IOPS=2420, BW=9682KiB/s (9914kB/s)(9788KiB/1011msec); 0 zone resets 00:13:33.691 slat (usec): min=5, max=25761, avg=235.92, stdev=1284.92 00:13:33.691 clat (msec): min=3, max=120, avg=31.20, stdev=24.60 00:13:33.691 lat (msec): min=3, max=120, avg=31.44, stdev=24.75 00:13:33.691 clat percentiles (msec): 00:13:33.691 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:13:33.691 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 24], 00:13:33.691 | 70.00th=[ 31], 80.00th=[ 44], 90.00th=[ 67], 95.00th=[ 99], 00:13:33.691 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 122], 99.95th=[ 122], 00:13:33.691 | 99.99th=[ 122] 00:13:33.691 bw ( KiB/s): min= 6264, max=12312, per=16.26%, avg=9288.00, stdev=4276.58, samples=2 00:13:33.691 iops : min= 1566, max= 3078, avg=2322.00, stdev=1069.15, samples=2 00:13:33.691 lat (msec) : 4=0.13%, 10=0.96%, 20=45.90%, 50=38.33%, 100=12.32% 00:13:33.691 lat (msec) : 250=2.36% 00:13:33.691 cpu : usr=3.96%, sys=4.95%, ctx=253, majf=0, minf=1 00:13:33.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:33.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.691 issued rwts: total=2048,2447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.691 job1: (groupid=0, jobs=1): err= 0: pid=3815978: Mon Jul 15 13:05:55 2024 00:13:33.691 read: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(16.5MiB/1053msec) 00:13:33.691 slat (usec): min=3, max=15355, avg=112.25, stdev=827.96 00:13:33.691 clat (usec): min=5542, max=65104, avg=15886.97, stdev=9411.84 00:13:33.691 lat (usec): min=5549, max=65112, avg=15999.22, stdev=9452.34 00:13:33.691 clat percentiles (usec): 00:13:33.691 | 1.00th=[ 8160], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:13:33.691 | 30.00th=[11338], 40.00th=[11994], 50.00th=[13042], 60.00th=[13960], 00:13:33.691 | 70.00th=[15139], 80.00th=[18744], 90.00th=[25822], 95.00th=[31065], 00:13:33.691 | 99.00th=[61604], 99.50th=[63177], 99.90th=[65274], 99.95th=[65274], 00:13:33.691 | 99.99th=[65274] 00:13:33.691 write: IOPS=4376, BW=17.1MiB/s (17.9MB/s)(18.0MiB/1053msec); 0 zone resets 00:13:33.691 
slat (usec): min=4, max=19877, avg=103.73, stdev=723.86 00:13:33.691 clat (usec): min=2625, max=65115, avg=14102.44, stdev=6824.77 00:13:33.691 lat (usec): min=2633, max=65124, avg=14206.16, stdev=6882.89 00:13:33.691 clat percentiles (usec): 00:13:33.691 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[10028], 00:13:33.691 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12911], 00:13:33.691 | 70.00th=[15664], 80.00th=[19006], 90.00th=[20841], 95.00th=[28181], 00:13:33.691 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[45351], 00:13:33.691 | 99.99th=[65274] 00:13:33.691 bw ( KiB/s): min=16384, max=20464, per=32.25%, avg=18424.00, stdev=2885.00, samples=2 00:13:33.691 iops : min= 4096, max= 5116, avg=4606.00, stdev=721.25, samples=2 00:13:33.691 lat (msec) : 4=0.42%, 10=15.32%, 20=68.21%, 50=14.62%, 100=1.43% 00:13:33.691 cpu : usr=6.37%, sys=9.22%, ctx=443, majf=0, minf=1 00:13:33.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:33.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.691 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.691 job2: (groupid=0, jobs=1): err= 0: pid=3815979: Mon Jul 15 13:05:55 2024 00:13:33.691 read: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:13:33.691 slat (usec): min=3, max=16050, avg=129.74, stdev=900.53 00:13:33.691 clat (usec): min=5598, max=53449, avg=16190.87, stdev=6416.06 00:13:33.691 lat (usec): min=5604, max=53463, avg=16320.61, stdev=6494.38 00:13:33.691 clat percentiles (usec): 00:13:33.692 | 1.00th=[ 8717], 5.00th=[11338], 10.00th=[11600], 20.00th=[12387], 00:13:33.692 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14484], 60.00th=[15139], 00:13:33.692 | 70.00th=[15664], 80.00th=[18220], 90.00th=[21365], 95.00th=[29230], 00:13:33.692 | 99.00th=[46400], 99.50th=[49021], 99.90th=[53216], 99.95th=[53216], 00:13:33.692 | 99.99th=[53216] 00:13:33.692 write: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(15.2MiB/1016msec); 0 zone resets 00:13:33.692 slat (usec): min=4, max=11230, avg=126.20, stdev=734.42 00:13:33.692 clat (usec): min=1420, max=81583, avg=18225.79, stdev=12796.97 00:13:33.692 lat (usec): min=1432, max=81602, avg=18351.99, stdev=12866.83 00:13:33.692 clat percentiles (usec): 00:13:33.692 | 1.00th=[ 3621], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 8586], 00:13:33.692 | 30.00th=[11076], 40.00th=[12518], 50.00th=[14353], 60.00th=[16909], 00:13:33.692 | 70.00th=[21627], 80.00th=[23200], 90.00th=[33424], 95.00th=[38536], 00:13:33.692 | 99.00th=[78119], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:13:33.692 | 99.99th=[81265] 00:13:33.692 bw ( KiB/s): min=13024, max=17040, per=26.32%, avg=15032.00, stdev=2839.74, samples=2 00:13:33.692 iops : min= 3256, max= 4260, avg=3758.00, stdev=709.94, samples=2 00:13:33.692 lat (msec) : 2=0.19%, 4=0.50%, 10=12.24%, 20=63.33%, 50=22.07% 00:13:33.692 lat (msec) : 100=1.67% 00:13:33.692 cpu : usr=5.62%, sys=8.47%, ctx=320, majf=0, minf=1 00:13:33.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:33.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.692 issued rwts: total=3584,3886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.692 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:33.692 job3: (groupid=0, jobs=1): err= 0: pid=3815980: Mon Jul 15 13:05:55 2024 00:13:33.692 read: IOPS=3786, BW=14.8MiB/s (15.5MB/s)(15.0MiB/1011msec) 00:13:33.692 slat (usec): min=2, max=26336, avg=118.25, stdev=996.13 00:13:33.692 clat (usec): min=5785, max=52918, avg=16665.83, stdev=6181.13 00:13:33.692 lat (usec): min=6293, max=53002, avg=16784.08, stdev=6272.90 00:13:33.692 clat percentiles (usec): 00:13:33.692 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11600], 20.00th=[11994], 00:13:33.692 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[16581], 00:13:33.692 | 70.00th=[19006], 80.00th=[21890], 90.00th=[26608], 95.00th=[27657], 00:13:33.692 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[42206], 00:13:33.692 | 99.99th=[52691] 00:13:33.692 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:13:33.692 slat (usec): min=3, max=21189, avg=100.37, stdev=758.11 00:13:33.692 clat (usec): min=635, max=90463, avg=15606.07, stdev=12933.55 00:13:33.692 lat (usec): min=653, max=90470, avg=15706.44, stdev=12976.63 00:13:33.692 clat percentiles (usec): 00:13:33.692 | 1.00th=[ 1237], 5.00th=[ 3228], 10.00th=[ 5932], 20.00th=[ 7701], 00:13:33.692 | 30.00th=[10290], 40.00th=[12256], 50.00th=[12780], 60.00th=[14353], 00:13:33.692 | 70.00th=[16188], 80.00th=[20055], 90.00th=[23987], 95.00th=[42730], 00:13:33.692 | 99.00th=[82314], 99.50th=[85459], 99.90th=[90702], 99.95th=[90702], 00:13:33.692 | 99.99th=[90702] 00:13:33.692 bw ( KiB/s): min=13520, max=19248, per=28.68%, avg=16384.00, stdev=4050.31, samples=2 00:13:33.692 iops : min= 3380, max= 4812, avg=4096.00, stdev=1012.58, samples=2 00:13:33.692 lat (usec) : 750=0.03%, 1000=0.29% 00:13:33.692 lat (msec) : 2=0.95%, 4=2.03%, 10=13.24%, 20=60.51%, 50=20.67% 00:13:33.692 lat (msec) : 100=2.28% 00:13:33.692 cpu : usr=3.96%, sys=4.75%, ctx=342, majf=0, minf=1 00:13:33.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:33.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.692 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.692 00:13:33.692 Run status group 0 (all jobs): 00:13:33.692 READ: bw=50.8MiB/s (53.2MB/s), 8103KiB/s-15.7MiB/s (8297kB/s-16.4MB/s), io=53.4MiB (56.0MB), run=1011-1053msec 00:13:33.692 WRITE: bw=55.8MiB/s (58.5MB/s), 9682KiB/s-17.1MiB/s (9914kB/s-17.9MB/s), io=58.7MiB (61.6MB), run=1011-1053msec 00:13:33.692 00:13:33.692 Disk stats (read/write): 00:13:33.692 nvme0n1: ios=2098/2063, merge=0/0, ticks=28745/34753, in_queue=63498, util=97.70% 00:13:33.692 nvme0n2: ios=3395/3584, merge=0/0, ticks=46365/43297, in_queue=89662, util=97.87% 00:13:33.692 nvme0n3: ios=2952/3072, merge=0/0, ticks=46938/56071, in_queue=103009, util=89.03% 00:13:33.692 nvme0n4: ios=3120/3232, merge=0/0, ticks=45005/46356, in_queue=91361, util=97.68% 00:13:33.692 13:05:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:33.692 13:05:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3816123 00:13:33.692 13:05:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:33.692 13:05:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:33.692 [global] 00:13:33.692 thread=1 00:13:33.692 invalidate=1 
00:13:33.692 rw=read 00:13:33.692 time_based=1 00:13:33.692 runtime=10 00:13:33.692 ioengine=libaio 00:13:33.692 direct=1 00:13:33.692 bs=4096 00:13:33.692 iodepth=1 00:13:33.692 norandommap=1 00:13:33.692 numjobs=1 00:13:33.692 00:13:33.692 [job0] 00:13:33.692 filename=/dev/nvme0n1 00:13:33.692 [job1] 00:13:33.692 filename=/dev/nvme0n2 00:13:33.692 [job2] 00:13:33.692 filename=/dev/nvme0n3 00:13:33.692 [job3] 00:13:33.692 filename=/dev/nvme0n4 00:13:33.692 Could not set queue depth (nvme0n1) 00:13:33.692 Could not set queue depth (nvme0n2) 00:13:33.692 Could not set queue depth (nvme0n3) 00:13:33.692 Could not set queue depth (nvme0n4) 00:13:33.692 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.692 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.692 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.692 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.692 fio-3.35 00:13:33.692 Starting 4 threads 00:13:36.981 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:36.981 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:36.981 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14295040, buflen=4096 00:13:36.981 fio: pid=3816338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:36.981 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:36.981 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:37.240 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=692224, buflen=4096 00:13:37.240 fio: pid=3816337, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.498 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.498 13:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:37.498 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=3448832, buflen=4096 00:13:37.498 fio: pid=3816312, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:37.756 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.756 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:37.756 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=561152, buflen=4096 00:13:37.756 fio: pid=3816333, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.756 00:13:37.756 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3816312: Mon Jul 15 13:05:59 2024 00:13:37.756 read: IOPS=243, BW=974KiB/s (998kB/s)(3368KiB/3457msec) 00:13:37.756 slat (usec): min=4, max=34285, avg=125.52, stdev=1694.28 00:13:37.757 clat (usec): min=246, max=41414, 
avg=3977.01, stdev=11661.32 00:13:37.757 lat (usec): min=251, max=41446, avg=4094.26, stdev=11752.95 00:13:37.757 clat percentiles (usec): 00:13:37.757 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:13:37.757 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:13:37.757 | 70.00th=[ 322], 80.00th=[ 379], 90.00th=[ 498], 95.00th=[41157], 00:13:37.757 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.757 | 99.99th=[41157] 00:13:37.757 bw ( KiB/s): min= 96, max= 216, per=2.42%, avg=121.33, stdev=48.11, samples=6 00:13:37.757 iops : min= 24, max= 54, avg=30.33, stdev=12.03, samples=6 00:13:37.757 lat (usec) : 250=0.47%, 500=89.56%, 750=0.83% 00:13:37.757 lat (msec) : 50=9.02% 00:13:37.757 cpu : usr=0.06%, sys=0.52%, ctx=849, majf=0, minf=1 00:13:37.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 issued rwts: total=843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.757 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3816333: Mon Jul 15 13:05:59 2024 00:13:37.757 read: IOPS=37, BW=147KiB/s (151kB/s)(548KiB/3718msec) 00:13:37.757 slat (usec): min=6, max=15937, avg=243.28, stdev=1685.75 00:13:37.757 clat (usec): min=260, max=41307, avg=26725.29, stdev=19459.33 00:13:37.757 lat (usec): min=267, max=45017, avg=26970.11, stdev=19286.19 00:13:37.757 clat percentiles (usec): 00:13:37.757 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 293], 00:13:37.757 | 30.00th=[ 404], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:13:37.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.757 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.757 | 99.99th=[41157] 00:13:37.757 bw ( KiB/s): min= 96, max= 375, per=2.85%, avg=142.71, stdev=102.74, samples=7 00:13:37.757 iops : min= 24, max= 93, avg=35.57, stdev=25.40, samples=7 00:13:37.757 lat (usec) : 500=32.61%, 750=2.17% 00:13:37.757 lat (msec) : 50=64.49% 00:13:37.757 cpu : usr=0.11%, sys=0.00%, ctx=142, majf=0, minf=1 00:13:37.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.757 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3816337: Mon Jul 15 13:05:59 2024 00:13:37.757 read: IOPS=53, BW=214KiB/s (219kB/s)(676KiB/3163msec) 00:13:37.757 slat (usec): min=6, max=8381, avg=66.78, stdev=641.56 00:13:37.757 clat (usec): min=317, max=48073, avg=18504.03, stdev=20301.83 00:13:37.757 lat (usec): min=344, max=48084, avg=18571.11, stdev=20271.96 00:13:37.757 clat percentiles (usec): 00:13:37.757 | 1.00th=[ 343], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 363], 00:13:37.757 | 30.00th=[ 408], 40.00th=[ 486], 50.00th=[ 562], 60.00th=[41157], 00:13:37.757 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.757 | 99.00th=[41681], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 
00:13:37.757 | 99.99th=[47973] 00:13:37.757 bw ( KiB/s): min= 104, max= 544, per=4.27%, avg=213.33, stdev=166.48, samples=6 00:13:37.757 iops : min= 26, max= 136, avg=53.33, stdev=41.62, samples=6 00:13:37.757 lat (usec) : 500=40.00%, 750=15.29% 00:13:37.757 lat (msec) : 50=44.12% 00:13:37.757 cpu : usr=0.19%, sys=0.00%, ctx=172, majf=0, minf=1 00:13:37.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.757 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3816338: Mon Jul 15 13:05:59 2024 00:13:37.757 read: IOPS=1201, BW=4804KiB/s (4919kB/s)(13.6MiB/2906msec) 00:13:37.757 slat (nsec): min=4894, max=72788, avg=23749.85, stdev=12555.47 00:13:37.757 clat (usec): min=325, max=41114, avg=796.82, stdev=3865.19 00:13:37.757 lat (usec): min=330, max=41121, avg=820.56, stdev=3865.04 00:13:37.757 clat percentiles (usec): 00:13:37.757 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 388], 00:13:37.757 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 433], 00:13:37.757 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 545], 00:13:37.757 | 99.00th=[ 709], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.757 | 99.99th=[41157] 00:13:37.757 bw ( KiB/s): min= 472, max= 9056, per=100.00%, avg=5568.00, stdev=4318.24, samples=5 00:13:37.757 iops : min= 118, max= 2264, avg=1392.00, stdev=1079.56, samples=5 00:13:37.757 lat (usec) : 500=92.49%, 750=6.53% 00:13:37.757 lat (msec) : 2=0.03%, 50=0.92% 00:13:37.757 cpu : usr=1.00%, sys=3.30%, ctx=3493, majf=0, minf=1 00:13:37.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.757 issued rwts: total=3491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.757 00:13:37.757 Run status group 0 (all jobs): 00:13:37.757 READ: bw=4990KiB/s (5110kB/s), 147KiB/s-4804KiB/s (151kB/s-4919kB/s), io=18.1MiB (19.0MB), run=2906-3718msec 00:13:37.757 00:13:37.757 Disk stats (read/write): 00:13:37.757 nvme0n1: ios=641/0, merge=0/0, ticks=3248/0, in_queue=3248, util=93.36% 00:13:37.757 nvme0n2: ios=134/0, merge=0/0, ticks=3541/0, in_queue=3541, util=95.74% 00:13:37.757 nvme0n3: ios=168/0, merge=0/0, ticks=3089/0, in_queue=3089, util=96.54% 00:13:37.757 nvme0n4: ios=3536/0, merge=0/0, ticks=3607/0, in_queue=3607, util=98.85% 00:13:38.015 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.015 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:38.273 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.273 13:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:38.531 13:06:00 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.531 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:38.789 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.789 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3816123 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:39.048 nvmf hotplug test: fio failed as expected 00:13:39.048 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.306 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.306 rmmod nvme_tcp 00:13:39.306 rmmod nvme_fabrics 00:13:39.307 rmmod nvme_keyring 00:13:39.307 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.307 13:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:39.307 13:06:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3814216 ']' 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3814216 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3814216 ']' 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3814216 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.307 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3814216 00:13:39.564 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.565 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.565 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3814216' 00:13:39.565 killing process with pid 3814216 00:13:39.565 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3814216 00:13:39.565 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3814216 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.823 13:06:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.729 13:06:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.729 00:13:41.729 real 0m23.313s 00:13:41.729 user 1m21.032s 00:13:41.729 sys 0m6.342s 00:13:41.729 13:06:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.729 13:06:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.729 ************************************ 00:13:41.729 END TEST nvmf_fio_target 00:13:41.729 ************************************ 00:13:41.729 13:06:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.729 13:06:03 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:41.729 13:06:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.729 13:06:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.729 13:06:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.729 ************************************ 00:13:41.729 START TEST nvmf_bdevio 00:13:41.729 ************************************ 00:13:41.729 13:06:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:41.988 * Looking for test storage... 
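For context, the nvmf_fio_target pass that just finished above is a hotplug test: fio is launched in the background against the four exported namespaces, the backing bdevs are deleted over RPC while I/O is in flight, and the pass is recorded as successful precisely because fio exits nonzero. A minimal sketch of that pattern, assuming relative spdk/ paths and reusing the fio-wrapper invocation from the log (the real fio.sh also deletes the raid0/concat0 volumes before the malloc bdevs):

# hotplug sketch: the test passes only if fio fails
./spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                   # give fio time to start issuing I/O
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    ./spdk/scripts/rpc.py bdev_malloc_delete "$bdev"   # yank a namespace under live I/O
done
if wait "$fio_pid"; then
    echo 'ERROR: fio survived bdev removal' >&2
    exit 1
fi
echo 'nvmf hotplug test: fio failed as expected'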
00:13:41.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.988 13:06:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.989 13:06:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.893 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.894 
Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:13:43.894 00:13:43.894 --- 10.0.0.2 ping statistics --- 00:13:43.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.894 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:43.894 00:13:43.894 --- 10.0.0.1 ping statistics --- 00:13:43.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.894 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.894 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3819062 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3819062 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3819062 ']' 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.153 13:06:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.153 [2024-07-15 13:06:05.653798] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:13:44.153 [2024-07-15 13:06:05.653866] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.153 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.153 [2024-07-15 13:06:05.720791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.153 [2024-07-15 13:06:05.835038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.153 [2024-07-15 13:06:05.835092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:44.153 [2024-07-15 13:06:05.835116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.153 [2024-07-15 13:06:05.835127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.153 [2024-07-15 13:06:05.835137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.153 [2024-07-15 13:06:05.835228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.153 [2024-07-15 13:06:05.835282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:44.153 [2024-07-15 13:06:05.835306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:44.153 [2024-07-15 13:06:05.835309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.088 [2024-07-15 13:06:06.664063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.088 Malloc0 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.088 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
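
The target side is now fully assembled: a TCP transport, a RAM-backed bdev, a subsystem with its namespace, and (per the listen notice just below) a listener on 10.0.0.2:4420. A condensed sketch of the same bring-up, calling rpc.py directly instead of going through the rpc_cmd/netns wrapper -- the script path and socket handling are assumptions, while the RPC names and arguments are verbatim from the trace:

# Assumes nvmf_tgt is already running inside cvl_0_0_ns_spdk and that rpc.py
# reaches its default /var/tmp/spdk.sock (the rpc_cmd wrapper in the trace
# handles that indirection).
rpc=./scripts/rpc.py   # hypothetical path; adjust to the SPDK checkout

$rpc nvmf_create_transport -t tcp -o -u 8192   # transport options copied verbatim from the trace
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks ("131072 blocks of 512 bytes" later)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
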
00:13:45.089 [2024-07-15 13:06:06.716567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:45.089 { 00:13:45.089 "params": { 00:13:45.089 "name": "Nvme$subsystem", 00:13:45.089 "trtype": "$TEST_TRANSPORT", 00:13:45.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.089 "adrfam": "ipv4", 00:13:45.089 "trsvcid": "$NVMF_PORT", 00:13:45.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.089 "hdgst": ${hdgst:-false}, 00:13:45.089 "ddgst": ${ddgst:-false} 00:13:45.089 }, 00:13:45.089 "method": "bdev_nvme_attach_controller" 00:13:45.089 } 00:13:45.089 EOF 00:13:45.089 )") 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:45.089 13:06:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:45.089 "params": { 00:13:45.089 "name": "Nvme1", 00:13:45.089 "trtype": "tcp", 00:13:45.089 "traddr": "10.0.0.2", 00:13:45.089 "adrfam": "ipv4", 00:13:45.089 "trsvcid": "4420", 00:13:45.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.089 "hdgst": false, 00:13:45.089 "ddgst": false 00:13:45.089 }, 00:13:45.089 "method": "bdev_nvme_attach_controller" 00:13:45.089 }' 00:13:45.089 [2024-07-15 13:06:06.765093] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
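
The initiator in this test is not the kernel but SPDK's own bdevio tool, and it receives its bdev configuration as JSON over an anonymous file descriptor (--json /dev/fd/62) rather than from a file on disk: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above. A minimal sketch of the same pattern; the attach parameters are copied from the trace, while the outer subsystems/config wrapper and the process-substitution plumbing (which is what shows up as /dev/fd/62 in the log) are assumptions:

# Feed bdevio its config without a temp file; <(...) becomes /dev/fd/NN.
./test/bdev/bdevio/bdevio --json <(cat << 'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)
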
00:13:45.089 [2024-07-15 13:06:06.765171] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819213 ] 00:13:45.346 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.346 [2024-07-15 13:06:06.825648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.346 [2024-07-15 13:06:06.940556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.346 [2024-07-15 13:06:06.940581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.346 [2024-07-15 13:06:06.940586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.603 I/O targets: 00:13:45.603 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:45.603 00:13:45.603 00:13:45.603 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.603 http://cunit.sourceforge.net/ 00:13:45.603 00:13:45.603 00:13:45.603 Suite: bdevio tests on: Nvme1n1 00:13:45.603 Test: blockdev write read block ...passed 00:13:45.603 Test: blockdev write zeroes read block ...passed 00:13:45.603 Test: blockdev write zeroes read no split ...passed 00:13:45.860 Test: blockdev write zeroes read split ...passed 00:13:45.860 Test: blockdev write zeroes read split partial ...passed 00:13:45.860 Test: blockdev reset ...[2024-07-15 13:06:07.367763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:45.860 [2024-07-15 13:06:07.367873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144d580 (9): Bad file descriptor 00:13:45.860 [2024-07-15 13:06:07.478932] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:45.860 passed 00:13:45.860 Test: blockdev write read 8 blocks ...passed 00:13:45.860 Test: blockdev write read size > 128k ...passed 00:13:45.860 Test: blockdev write read invalid size ...passed 00:13:45.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:45.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:45.860 Test: blockdev write read max offset ...passed 00:13:46.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.118 Test: blockdev writev readv 8 blocks ...passed 00:13:46.118 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.118 Test: blockdev writev readv block ...passed 00:13:46.118 Test: blockdev writev readv size > 128k ...passed 00:13:46.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.118 Test: blockdev comparev and writev ...[2024-07-15 13:06:07.696057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.696093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.696538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.696564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.696586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.696604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.697001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.697025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.697064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.697467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.697491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.697512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.118 [2024-07-15 13:06:07.697528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.118 passed 00:13:46.118 Test: blockdev nvme passthru rw ...passed 00:13:46.118 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:06:07.781245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.118 [2024-07-15 13:06:07.781273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.781457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.118 [2024-07-15 13:06:07.781479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.781653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.118 [2024-07-15 13:06:07.781676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.118 [2024-07-15 13:06:07.781859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.118 [2024-07-15 13:06:07.781890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.118 passed 00:13:46.118 Test: blockdev nvme admin passthru ...passed 00:13:46.377 Test: blockdev copy ...passed 00:13:46.377 00:13:46.377 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.377 suites 1 1 n/a 0 0 00:13:46.377 tests 23 23 23 0 0 00:13:46.377 asserts 152 152 152 0 n/a 00:13:46.377 00:13:46.377 Elapsed time = 1.361 seconds 00:13:46.377 13:06:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.377 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.377 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.635 rmmod nvme_tcp 00:13:46.635 rmmod nvme_fabrics 00:13:46.635 rmmod nvme_keyring 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3819062 ']' 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3819062 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3819062 ']' 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3819062 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3819062 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3819062' 00:13:46.635 killing process with pid 3819062 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3819062 00:13:46.635 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3819062 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.893 13:06:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.797 13:06:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.056 00:13:49.056 real 0m7.119s 00:13:49.056 user 0m13.639s 00:13:49.056 sys 0m2.125s 00:13:49.056 13:06:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.056 13:06:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.056 ************************************ 00:13:49.056 END TEST nvmf_bdevio 00:13:49.056 ************************************ 00:13:49.056 13:06:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.056 13:06:10 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.056 13:06:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.056 13:06:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.056 13:06:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.056 ************************************ 00:13:49.056 START TEST nvmf_auth_target 00:13:49.056 ************************************ 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:49.056 * Looking for test storage... 
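
That closes the nvmf_bdevio run: 23 of 23 CUnit tests passed in 1.361 seconds, the subsystem is deleted, the target process is killed by pid, and the kernel initiator modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines). A sketch of the teardown mirrored from the trace -- the names and pid come from the log; the explicit netns deletion is an assumption, since the trace only shows the _remove_spdk_ns helper running with its output discarded:

# Tear down the fixture built during nvmf_tcp_init (run as root,
# in the shell that started nvmf_tgt).
kill "$nvmfpid" && wait "$nvmfpid"     # stop the nvmf_tgt started earlier
modprobe -v -r nvme-tcp                # also drops nvme_fabrics/nvme_keyring, as logged
ip netns delete cvl_0_0_ns_spdk        # physical cvl_0_0 falls back to the root namespace
ip -4 addr flush cvl_0_1               # remove 10.0.0.1/24 from the initiator port
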
00:13:49.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.056 13:06:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.057 13:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.963 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.964 13:06:12 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:13:50.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.964 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:51.223 00:13:51.223 --- 10.0.0.2 ping statistics --- 00:13:51.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.223 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:51.223 00:13:51.223 --- 10.0.0.1 ping statistics --- 00:13:51.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.223 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3821789 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3821789 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3821789 ']' 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
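
For the auth test the two-port E810 fixture is rebuilt exactly as before: one port (cvl_0_0, 10.0.0.2) is pushed into a private network namespace to play the target, its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, port 4420 is opened in iptables, and connectivity is proven with one ping in each direction. A sketch of that split, with every interface name, address, and port taken from the trace (run as root on the test host):

ns=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from a clean slate
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                 # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> namespace
ip netns exec "$ns" ping -c 1 10.0.0.1          # namespace -> root ns

Because the target is then launched under ip netns exec cvl_0_0_ns_spdk, both ends of every TCP connection terminate on real NIC ports of the same machine instead of short-circuiting through loopback.
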
00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.223 13:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3821816 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c50afc952d9de4f52ff1565c54a61899e6943b52b4b7a664 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.n0x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c50afc952d9de4f52ff1565c54a61899e6943b52b4b7a664 0 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c50afc952d9de4f52ff1565c54a61899e6943b52b4b7a664 0 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c50afc952d9de4f52ff1565c54a61899e6943b52b4b7a664 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.n0x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.n0x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.n0x 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2eed849c896240fedecb8ea6687aa943c837cde5577c3af36f53c010c18da343 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tts 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2eed849c896240fedecb8ea6687aa943c837cde5577c3af36f53c010c18da343 3 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2eed849c896240fedecb8ea6687aa943c837cde5577c3af36f53c010c18da343 3 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2eed849c896240fedecb8ea6687aa943c837cde5577c3af36f53c010c18da343 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tts 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tts 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.tts 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9044692e25ecab9d1fea01f2dc159ba9 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jLx 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9044692e25ecab9d1fea01f2dc159ba9 1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9044692e25ecab9d1fea01f2dc159ba9 1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=9044692e25ecab9d1fea01f2dc159ba9 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:51.481 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jLx 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jLx 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.jLx 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be5c08e383ca13c31ffcd8785137a97226afda1270e0fb0b 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lci 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be5c08e383ca13c31ffcd8785137a97226afda1270e0fb0b 2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be5c08e383ca13c31ffcd8785137a97226afda1270e0fb0b 2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be5c08e383ca13c31ffcd8785137a97226afda1270e0fb0b 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lci 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lci 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.lci 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2d1cc622ec1d6f09af4f362a43dbedab99f58e5020b4ee06 00:13:51.738 
13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LLA 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2d1cc622ec1d6f09af4f362a43dbedab99f58e5020b4ee06 2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2d1cc622ec1d6f09af4f362a43dbedab99f58e5020b4ee06 2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2d1cc622ec1d6f09af4f362a43dbedab99f58e5020b4ee06 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LLA 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LLA 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.LLA 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7b4b6aea3b985c1df791d6bb3b43d0c3 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WxU 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7b4b6aea3b985c1df791d6bb3b43d0c3 1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7b4b6aea3b985c1df791d6bb3b43d0c3 1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7b4b6aea3b985c1df791d6bb3b43d0c3 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WxU 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WxU 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.WxU 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83313e59a9237ef173a6a66ae2abffc1c006a2451cd963c064079012b4aaafb6 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.50P 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83313e59a9237ef173a6a66ae2abffc1c006a2451cd963c064079012b4aaafb6 3 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83313e59a9237ef173a6a66ae2abffc1c006a2451cd963c064079012b4aaafb6 3 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83313e59a9237ef173a6a66ae2abffc1c006a2451cd963c064079012b4aaafb6 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.50P 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.50P 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.50P 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3821789 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3821789 ']' 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
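
gen_dhchap_key, traced four times above, builds each secret the same way: read len/2 random bytes from /dev/urandom as a hex string, then frame that string as a DH-HMAC-CHAP secret and store it in a chmod-0600 temp file. The inline python body is elided by the log, so the framing below (base64 of the key plus its little-endian CRC-32 behind a DHHC-1:<digest id>: prefix, with digest ids 0=null, 1=sha256, 2=sha384, 3=sha512 as in the trace) is an assumption based on the NVMe DH-HMAC-CHAP secret representation:

# Sketch of gen_dhchap_key: gen_key <digest id> <hex length>
gen_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 48 hex chars -> 24 bytes
    file=$(mktemp -t spdk.key-XXX)
    python3 - "$key" "$digest" << 'PYEOF' > "$file"
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed framing
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
    chmod 0600 "$file" && echo "$file"
}

gen_key 0 48    # would correspond to keys[0] above (null digest, 48 hex chars)

The resulting files are then registered twice each -- once on the target's RPC socket and once on the host app's /var/tmp/host.sock via keyring_file_add_key -- which is exactly what the rpc.py calls that follow do.
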
00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.738 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3821816 /var/tmp/host.sock 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3821816 ']' 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.995 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n0x 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.n0x 00:13:52.253 13:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.n0x 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.tts ]] 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tts 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tts 00:13:52.512 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tts 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jLx 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.jLx 00:13:52.771 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.jLx 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.lci ]] 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lci 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lci 00:13:53.029 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lci 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LLA 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LLA 00:13:53.287 13:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LLA 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.WxU ]] 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WxU 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WxU 00:13:53.545 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.WxU 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.50P 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.50P 00:13:53.803 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.50P 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.061 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.319 13:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.577 00:13:54.838 13:06:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.838 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.838 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.096 { 00:13:55.096 "cntlid": 1, 00:13:55.096 "qid": 0, 00:13:55.096 "state": "enabled", 00:13:55.096 "thread": "nvmf_tgt_poll_group_000", 00:13:55.096 "listen_address": { 00:13:55.096 "trtype": "TCP", 00:13:55.096 "adrfam": "IPv4", 00:13:55.096 "traddr": "10.0.0.2", 00:13:55.096 "trsvcid": "4420" 00:13:55.096 }, 00:13:55.096 "peer_address": { 00:13:55.096 "trtype": "TCP", 00:13:55.096 "adrfam": "IPv4", 00:13:55.096 "traddr": "10.0.0.1", 00:13:55.096 "trsvcid": "60570" 00:13:55.096 }, 00:13:55.096 "auth": { 00:13:55.096 "state": "completed", 00:13:55.096 "digest": "sha256", 00:13:55.096 "dhgroup": "null" 00:13:55.096 } 00:13:55.096 } 00:13:55.096 ]' 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.096 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.354 13:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.292 13:06:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.292 13:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.550 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:56.550 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.550 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.550 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:56.550 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.551 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.818 00:13:56.818 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.818 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.818 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.076 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.076 { 00:13:57.076 "cntlid": 3, 00:13:57.076 "qid": 0, 00:13:57.076 
"state": "enabled", 00:13:57.076 "thread": "nvmf_tgt_poll_group_000", 00:13:57.076 "listen_address": { 00:13:57.076 "trtype": "TCP", 00:13:57.076 "adrfam": "IPv4", 00:13:57.076 "traddr": "10.0.0.2", 00:13:57.076 "trsvcid": "4420" 00:13:57.076 }, 00:13:57.076 "peer_address": { 00:13:57.076 "trtype": "TCP", 00:13:57.076 "adrfam": "IPv4", 00:13:57.076 "traddr": "10.0.0.1", 00:13:57.076 "trsvcid": "60596" 00:13:57.076 }, 00:13:57.076 "auth": { 00:13:57.076 "state": "completed", 00:13:57.076 "digest": "sha256", 00:13:57.076 "dhgroup": "null" 00:13:57.076 } 00:13:57.076 } 00:13:57.076 ]' 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.334 13:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.592 13:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.528 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:58.785 13:06:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.785 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.354 00:13:59.354 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.354 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.354 13:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.354 { 00:13:59.354 "cntlid": 5, 00:13:59.354 "qid": 0, 00:13:59.354 "state": "enabled", 00:13:59.354 "thread": "nvmf_tgt_poll_group_000", 00:13:59.354 "listen_address": { 00:13:59.354 "trtype": "TCP", 00:13:59.354 "adrfam": "IPv4", 00:13:59.354 "traddr": "10.0.0.2", 00:13:59.354 "trsvcid": "4420" 00:13:59.354 }, 00:13:59.354 "peer_address": { 00:13:59.354 "trtype": "TCP", 00:13:59.354 "adrfam": "IPv4", 00:13:59.354 "traddr": "10.0.0.1", 00:13:59.354 "trsvcid": "60636" 00:13:59.354 }, 00:13:59.354 "auth": { 00:13:59.354 "state": "completed", 00:13:59.354 "digest": "sha256", 00:13:59.354 "dhgroup": "null" 00:13:59.354 } 00:13:59.354 } 00:13:59.354 ]' 00:13:59.354 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.613 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.871 13:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.807 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.065 13:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.636 00:14:01.636 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.636 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.636 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.894 { 00:14:01.894 "cntlid": 7, 00:14:01.894 "qid": 0, 00:14:01.894 "state": "enabled", 00:14:01.894 "thread": "nvmf_tgt_poll_group_000", 00:14:01.894 "listen_address": { 00:14:01.894 "trtype": "TCP", 00:14:01.894 "adrfam": "IPv4", 00:14:01.894 "traddr": "10.0.0.2", 00:14:01.894 "trsvcid": "4420" 00:14:01.894 }, 00:14:01.894 "peer_address": { 00:14:01.894 "trtype": "TCP", 00:14:01.894 "adrfam": "IPv4", 00:14:01.894 "traddr": "10.0.0.1", 00:14:01.894 "trsvcid": "35882" 00:14:01.894 }, 00:14:01.894 "auth": { 00:14:01.894 "state": "completed", 00:14:01.894 "digest": "sha256", 00:14:01.894 "dhgroup": "null" 00:14:01.894 } 00:14:01.894 } 00:14:01.894 ]' 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.894 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.152 13:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.087 13:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.345 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.910 00:14:03.910 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.910 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.910 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.168 { 00:14:04.168 "cntlid": 9, 00:14:04.168 "qid": 0, 00:14:04.168 "state": "enabled", 00:14:04.168 "thread": "nvmf_tgt_poll_group_000", 00:14:04.168 "listen_address": { 00:14:04.168 "trtype": "TCP", 00:14:04.168 "adrfam": "IPv4", 00:14:04.168 "traddr": "10.0.0.2", 00:14:04.168 "trsvcid": "4420" 00:14:04.168 }, 00:14:04.168 "peer_address": { 00:14:04.168 "trtype": "TCP", 00:14:04.168 "adrfam": "IPv4", 00:14:04.168 "traddr": "10.0.0.1", 00:14:04.168 "trsvcid": "35902" 00:14:04.168 }, 00:14:04.168 "auth": { 00:14:04.168 "state": "completed", 00:14:04.168 "digest": "sha256", 00:14:04.168 "dhgroup": "ffdhe2048" 00:14:04.168 } 00:14:04.168 } 00:14:04.168 ]' 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.168 13:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.427 13:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.362 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.620 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.188 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.188 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.447 { 00:14:06.447 "cntlid": 11, 00:14:06.447 "qid": 0, 00:14:06.447 "state": "enabled", 00:14:06.447 "thread": "nvmf_tgt_poll_group_000", 00:14:06.447 "listen_address": { 00:14:06.447 "trtype": "TCP", 00:14:06.447 "adrfam": "IPv4", 00:14:06.447 "traddr": "10.0.0.2", 00:14:06.447 "trsvcid": "4420" 00:14:06.447 }, 00:14:06.447 "peer_address": { 00:14:06.447 "trtype": "TCP", 00:14:06.447 "adrfam": "IPv4", 00:14:06.447 "traddr": "10.0.0.1", 00:14:06.447 "trsvcid": "35930" 00:14:06.447 }, 00:14:06.447 "auth": { 00:14:06.447 "state": "completed", 00:14:06.447 "digest": "sha256", 00:14:06.447 "dhgroup": "ffdhe2048" 00:14:06.447 } 00:14:06.447 } 00:14:06.447 ]' 00:14:06.447 
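Annotation: from the keyring_file_add_key calls onward, the trace is the registration-and-verification loop in target/auth.sh. One iteration, condensed from the commands visible above (socket paths, NQNs, and flags exactly as they appear; the hostrpc wrapper is the `rpc.py -s /var/tmp/host.sock` call shown at target/auth.sh@31, and the three separate jq checks are folded into a single `jq -e` expression for brevity):

```bash
rpc=scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

# Register each key file on both sides: the target (default /var/tmp/spdk.sock)
# and the host-side app listening on /var/tmp/host.sock.
"$rpc"  keyring_file_add_key key0  /tmp/spdk.key-null.n0x
hostrpc keyring_file_add_key key0  /tmp/spdk.key-null.n0x
"$rpc"  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tts
hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tts

# One (digest, dhgroup, key) cell of the matrix driven by target/auth.sh.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
       --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
       -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
       --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the qpair actually negotiated DH-HMAC-CHAP with the expected params.
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
       jq -e '.[0].auth | .state == "completed" and
                          .digest == "sha256" and .dhgroup == "null"'
hostrpc bdev_nvme_detach_controller nvme0
```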
13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.447 13:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.447 13:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.447 13:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.447 13:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.704 13:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.637 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.894 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.463 00:14:08.463 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.463 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.463 13:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.721 { 00:14:08.721 "cntlid": 13, 00:14:08.721 "qid": 0, 00:14:08.721 "state": "enabled", 00:14:08.721 "thread": "nvmf_tgt_poll_group_000", 00:14:08.721 "listen_address": { 00:14:08.721 "trtype": "TCP", 00:14:08.721 "adrfam": "IPv4", 00:14:08.721 "traddr": "10.0.0.2", 00:14:08.721 "trsvcid": "4420" 00:14:08.721 }, 00:14:08.721 "peer_address": { 00:14:08.721 "trtype": "TCP", 00:14:08.721 "adrfam": "IPv4", 00:14:08.721 "traddr": "10.0.0.1", 00:14:08.721 "trsvcid": "35960" 00:14:08.721 }, 00:14:08.721 "auth": { 00:14:08.721 "state": "completed", 00:14:08.721 "digest": "sha256", 00:14:08.721 "dhgroup": "ffdhe2048" 00:14:08.721 } 00:14:08.721 } 00:14:08.721 ]' 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.721 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.979 13:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.915 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.173 13:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.742 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.742 { 00:14:10.742 "cntlid": 15, 00:14:10.742 "qid": 0, 00:14:10.742 "state": "enabled", 00:14:10.742 "thread": "nvmf_tgt_poll_group_000", 00:14:10.742 "listen_address": { 00:14:10.742 "trtype": "TCP", 00:14:10.742 "adrfam": "IPv4", 00:14:10.742 "traddr": "10.0.0.2", 00:14:10.742 "trsvcid": "4420" 00:14:10.742 }, 00:14:10.742 "peer_address": { 00:14:10.742 "trtype": "TCP", 00:14:10.742 "adrfam": "IPv4", 00:14:10.742 "traddr": "10.0.0.1", 00:14:10.742 "trsvcid": "40022" 00:14:10.742 }, 00:14:10.742 "auth": { 00:14:10.742 "state": "completed", 00:14:10.742 "digest": "sha256", 00:14:10.742 "dhgroup": "ffdhe2048" 00:14:10.742 } 00:14:10.742 } 00:14:10.742 ]' 00:14:10.742 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.001 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.259 13:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target 
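Annotation: each key iteration closes with a kernel-initiator round-trip: after bdev_nvme_detach_controller, the same secrets are passed to `nvme connect` in literal DHHC-1 form, followed by `nvme disconnect` and nvmf_subsystem_remove_host before the next key or dhgroup is tried (null, then ffdhe2048, then ffdhe3072 under sha256 in this pass). A sketch with the key0/ckey0 values that appear earlier in this log:

```bash
# Kernel-initiator leg of one iteration; secrets are the DHHC-1 strings
# copied verbatim from this log's keyring files for key0/ckey0.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```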
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.190 13:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.447 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.013 00:14:13.013 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.013 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.013 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.271 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.271 { 00:14:13.271 "cntlid": 17, 00:14:13.271 "qid": 0, 00:14:13.271 "state": "enabled", 00:14:13.271 "thread": "nvmf_tgt_poll_group_000", 00:14:13.271 "listen_address": { 00:14:13.271 "trtype": "TCP", 00:14:13.271 "adrfam": "IPv4", 00:14:13.271 "traddr": 
"10.0.0.2", 00:14:13.271 "trsvcid": "4420" 00:14:13.271 }, 00:14:13.271 "peer_address": { 00:14:13.271 "trtype": "TCP", 00:14:13.271 "adrfam": "IPv4", 00:14:13.271 "traddr": "10.0.0.1", 00:14:13.272 "trsvcid": "40044" 00:14:13.272 }, 00:14:13.272 "auth": { 00:14:13.272 "state": "completed", 00:14:13.272 "digest": "sha256", 00:14:13.272 "dhgroup": "ffdhe3072" 00:14:13.272 } 00:14:13.272 } 00:14:13.272 ]' 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.272 13:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.530 13:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:14:14.466 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.467 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.725 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.294 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.294 13:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.552 { 00:14:15.552 "cntlid": 19, 00:14:15.552 "qid": 0, 00:14:15.552 "state": "enabled", 00:14:15.552 "thread": "nvmf_tgt_poll_group_000", 00:14:15.552 "listen_address": { 00:14:15.552 "trtype": "TCP", 00:14:15.552 "adrfam": "IPv4", 00:14:15.552 "traddr": "10.0.0.2", 00:14:15.552 "trsvcid": "4420" 00:14:15.552 }, 00:14:15.552 "peer_address": { 00:14:15.552 "trtype": "TCP", 00:14:15.552 "adrfam": "IPv4", 00:14:15.552 "traddr": "10.0.0.1", 00:14:15.552 "trsvcid": "40066" 00:14:15.552 }, 00:14:15.552 "auth": { 00:14:15.552 "state": "completed", 00:14:15.552 "digest": "sha256", 00:14:15.552 "dhgroup": "ffdhe3072" 00:14:15.552 } 00:14:15.552 } 00:14:15.552 ]' 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.552 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.810 13:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.745 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.004 13:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.571 00:14:17.571 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.571 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.571 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.830 { 00:14:17.830 "cntlid": 21, 00:14:17.830 "qid": 0, 00:14:17.830 "state": "enabled", 00:14:17.830 "thread": "nvmf_tgt_poll_group_000", 00:14:17.830 "listen_address": { 00:14:17.830 "trtype": "TCP", 00:14:17.830 "adrfam": "IPv4", 00:14:17.830 "traddr": "10.0.0.2", 00:14:17.830 "trsvcid": "4420" 00:14:17.830 }, 00:14:17.830 "peer_address": { 00:14:17.830 "trtype": "TCP", 00:14:17.830 "adrfam": "IPv4", 00:14:17.830 "traddr": "10.0.0.1", 00:14:17.830 "trsvcid": "40106" 00:14:17.830 }, 00:14:17.830 "auth": { 00:14:17.830 "state": "completed", 00:14:17.830 "digest": "sha256", 00:14:17.830 "dhgroup": "ffdhe3072" 00:14:17.830 } 00:14:17.830 } 00:14:17.830 ]' 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.830 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.089 13:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
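The trace above is one pass of connect_authenticate <digest> <dhgroup> <keyid>: the host-side bdev driver is restricted to exactly the digest and DH group under test, the host NQN is authorized on the subsystem with a DH-CHAP key pair, and a controller is attached over the RPC path, which forces the DH-HMAC-CHAP handshake. A minimal sketch of that flow, using the paths, NQNs and addresses from this run, and assuming the named keys (key0..key3, ckey0..ckey2) were registered with both RPC servers earlier in the script and that rpc_cmd addresses the target's default RPC socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side (SPDK initiator at /var/tmp/host.sock): allow only one
  # digest and one DH group so the handshake must negotiate exactly them.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: authorize the host NQN with a bidirectional key pair.
  # (The key3 passes above omit --dhchap-ctrlr-key: unidirectional auth.)
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach; successful completion implies the handshake passed.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1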
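Each pass is then verified by reading back the controller name and the subsystem's queue pairs, asserting on the auth block seen in the qpairs= captures above: the negotiated digest and dhgroup must match what was just configured, and the state must be "completed". Condensed, with the same variables as the sketch above:

  # Condensed form of the target/auth.sh@44-49 checks in the trace.
  name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach the RPC-attached controller before the kernel-initiator pass.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0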
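Finally the same credentials are exercised through the kernel initiator: nvme connect takes the secrets inline in the DHHC-1:<t>:<base64>: representation, where <t> encodes the optional hash transform applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, per the NVMe DH-HMAC-CHAP secret format); the base64 strings in this run are generated test keys. A sketch with the secrets elided (full values appear verbatim in the trace):

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:01:OTA0...' \
      --dhchap-ctrl-secret 'DHHC-1:02:YmU1...'

  nvme disconnect -n "$SUBNQN"

  # De-authorize the host so the next (dhgroup, keyid) pass starts clean.
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"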
00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.026 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.285 13:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.854 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.854 { 00:14:19.854 "cntlid": 23, 00:14:19.854 "qid": 0, 00:14:19.854 "state": "enabled", 00:14:19.854 "thread": "nvmf_tgt_poll_group_000", 00:14:19.854 "listen_address": { 00:14:19.854 "trtype": "TCP", 00:14:19.854 "adrfam": "IPv4", 00:14:19.854 "traddr": "10.0.0.2", 00:14:19.854 "trsvcid": "4420" 00:14:19.854 }, 00:14:19.854 "peer_address": { 00:14:19.854 "trtype": "TCP", 00:14:19.854 "adrfam": "IPv4", 00:14:19.854 "traddr": "10.0.0.1", 00:14:19.854 "trsvcid": "58124" 00:14:19.854 }, 00:14:19.854 "auth": { 00:14:19.854 "state": "completed", 00:14:19.854 "digest": "sha256", 00:14:19.854 "dhgroup": "ffdhe3072" 00:14:19.854 } 00:14:19.854 } 00:14:19.854 ]' 00:14:19.854 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.113 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.371 13:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.338 13:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.596 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.165 00:14:22.165 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.165 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.165 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.423 { 00:14:22.423 "cntlid": 25, 00:14:22.423 "qid": 0, 00:14:22.423 "state": "enabled", 00:14:22.423 "thread": "nvmf_tgt_poll_group_000", 00:14:22.423 "listen_address": { 00:14:22.423 "trtype": "TCP", 00:14:22.423 "adrfam": "IPv4", 00:14:22.423 "traddr": "10.0.0.2", 00:14:22.423 "trsvcid": "4420" 00:14:22.423 }, 00:14:22.423 "peer_address": { 00:14:22.423 "trtype": "TCP", 00:14:22.423 "adrfam": "IPv4", 00:14:22.423 "traddr": "10.0.0.1", 00:14:22.423 "trsvcid": "58146" 00:14:22.423 }, 00:14:22.423 "auth": { 00:14:22.423 "state": "completed", 00:14:22.423 "digest": "sha256", 00:14:22.423 "dhgroup": "ffdhe4096" 00:14:22.423 } 00:14:22.423 } 00:14:22.423 ]' 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.423 13:06:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:22.423 13:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.423 13:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.423 13:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.423 13:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.681 13:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.618 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.876 13:06:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.876 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.445 00:14:24.445 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.445 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.445 13:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.703 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.703 { 00:14:24.703 "cntlid": 27, 00:14:24.703 "qid": 0, 00:14:24.703 "state": "enabled", 00:14:24.703 "thread": "nvmf_tgt_poll_group_000", 00:14:24.703 "listen_address": { 00:14:24.703 "trtype": "TCP", 00:14:24.703 "adrfam": "IPv4", 00:14:24.704 "traddr": "10.0.0.2", 00:14:24.704 "trsvcid": "4420" 00:14:24.704 }, 00:14:24.704 "peer_address": { 00:14:24.704 "trtype": "TCP", 00:14:24.704 "adrfam": "IPv4", 00:14:24.704 "traddr": "10.0.0.1", 00:14:24.704 "trsvcid": "58152" 00:14:24.704 }, 00:14:24.704 "auth": { 00:14:24.704 "state": "completed", 00:14:24.704 "digest": "sha256", 00:14:24.704 "dhgroup": "ffdhe4096" 00:14:24.704 } 00:14:24.704 } 00:14:24.704 ]' 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.704 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.962 13:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.898 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.156 13:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.724 00:14:26.724 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.725 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.725 13:06:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.983 { 00:14:26.983 "cntlid": 29, 00:14:26.983 "qid": 0, 00:14:26.983 "state": "enabled", 00:14:26.983 "thread": "nvmf_tgt_poll_group_000", 00:14:26.983 "listen_address": { 00:14:26.983 "trtype": "TCP", 00:14:26.983 "adrfam": "IPv4", 00:14:26.983 "traddr": "10.0.0.2", 00:14:26.983 "trsvcid": "4420" 00:14:26.983 }, 00:14:26.983 "peer_address": { 00:14:26.983 "trtype": "TCP", 00:14:26.983 "adrfam": "IPv4", 00:14:26.983 "traddr": "10.0.0.1", 00:14:26.983 "trsvcid": "58178" 00:14:26.983 }, 00:14:26.983 "auth": { 00:14:26.983 "state": "completed", 00:14:26.983 "digest": "sha256", 00:14:26.983 "dhgroup": "ffdhe4096" 00:14:26.983 } 00:14:26.983 } 00:14:26.983 ]' 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.983 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.243 13:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:14:28.177 13:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.435 13:06:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.435 13:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.691 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.948 00:14:28.948 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.948 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.948 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.240 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.240 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.240 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.240 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.497 13:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.497 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.497 { 00:14:29.497 "cntlid": 31, 00:14:29.497 "qid": 0, 00:14:29.497 "state": "enabled", 00:14:29.497 "thread": "nvmf_tgt_poll_group_000", 00:14:29.497 "listen_address": { 00:14:29.497 "trtype": "TCP", 00:14:29.497 "adrfam": "IPv4", 00:14:29.497 "traddr": "10.0.0.2", 00:14:29.497 "trsvcid": "4420" 00:14:29.497 }, 
00:14:29.497 "peer_address": { 00:14:29.497 "trtype": "TCP", 00:14:29.497 "adrfam": "IPv4", 00:14:29.497 "traddr": "10.0.0.1", 00:14:29.497 "trsvcid": "58212" 00:14:29.497 }, 00:14:29.497 "auth": { 00:14:29.497 "state": "completed", 00:14:29.497 "digest": "sha256", 00:14:29.497 "dhgroup": "ffdhe4096" 00:14:29.497 } 00:14:29.497 } 00:14:29.497 ]' 00:14:29.497 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.497 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.497 13:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.497 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.497 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.497 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.497 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.497 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.754 13:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.684 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.942 13:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.508 00:14:31.508 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.508 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.508 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.766 { 00:14:31.766 "cntlid": 33, 00:14:31.766 "qid": 0, 00:14:31.766 "state": "enabled", 00:14:31.766 "thread": "nvmf_tgt_poll_group_000", 00:14:31.766 "listen_address": { 00:14:31.766 "trtype": "TCP", 00:14:31.766 "adrfam": "IPv4", 00:14:31.766 "traddr": "10.0.0.2", 00:14:31.766 "trsvcid": "4420" 00:14:31.766 }, 00:14:31.766 "peer_address": { 00:14:31.766 "trtype": "TCP", 00:14:31.766 "adrfam": "IPv4", 00:14:31.766 "traddr": "10.0.0.1", 00:14:31.766 "trsvcid": "52486" 00:14:31.766 }, 00:14:31.766 "auth": { 00:14:31.766 "state": "completed", 00:14:31.766 "digest": "sha256", 00:14:31.766 "dhgroup": "ffdhe6144" 00:14:31.766 } 00:14:31.766 } 00:14:31.766 ]' 00:14:31.766 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.023 13:06:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.023 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.281 13:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.217 13:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.475 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:34.044
00:14:34.044 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:34.044 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:34.044 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:34.302 {
00:14:34.302 "cntlid": 35,
00:14:34.302 "qid": 0,
00:14:34.302 "state": "enabled",
00:14:34.302 "thread": "nvmf_tgt_poll_group_000",
00:14:34.302 "listen_address": {
00:14:34.302 "trtype": "TCP",
00:14:34.302 "adrfam": "IPv4",
00:14:34.302 "traddr": "10.0.0.2",
00:14:34.302 "trsvcid": "4420"
00:14:34.302 },
00:14:34.302 "peer_address": {
00:14:34.302 "trtype": "TCP",
00:14:34.302 "adrfam": "IPv4",
00:14:34.302 "traddr": "10.0.0.1",
00:14:34.302 "trsvcid": "52494"
00:14:34.302 },
00:14:34.302 "auth": {
00:14:34.302 "state": "completed",
00:14:34.302 "digest": "sha256",
00:14:34.302 "dhgroup": "ffdhe6144"
00:14:34.302 }
00:14:34.302 }
00:14:34.302 ]'
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:34.302 13:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:34.560 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:34.560 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:34.560 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:34.560 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:34.560 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:34.820 13:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==:
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:35.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:35.765 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:36.024 13:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:36.594
00:14:36.594 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:36.594 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:36.594 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:36.852 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:36.852 {
00:14:36.852 "cntlid": 37,
00:14:36.852 "qid": 0,
00:14:36.852 "state": "enabled",
00:14:36.852 "thread": "nvmf_tgt_poll_group_000",
00:14:36.852 "listen_address": {
00:14:36.852 "trtype": "TCP",
00:14:36.852 "adrfam": "IPv4",
00:14:36.852 "traddr": "10.0.0.2",
00:14:36.852 "trsvcid": "4420"
00:14:36.852 },
00:14:36.852 "peer_address": {
00:14:36.852 "trtype": "TCP",
00:14:36.852 "adrfam": "IPv4",
00:14:36.852 "traddr": "10.0.0.1",
00:14:36.852 "trsvcid": "52524"
00:14:36.853 },
00:14:36.853 "auth": {
00:14:36.853 "state": "completed",
00:14:36.853 "digest": "sha256",
00:14:36.853 "dhgroup": "ffdhe6144"
00:14:36.853 }
00:14:36.853 }
00:14:36.853 ]'
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:36.853 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:37.111 13:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E:
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:38.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:38.046 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:38.304 13:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:38.872
00:14:38.872 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:38.872 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:38.872 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:39.130 {
00:14:39.130 "cntlid": 39,
00:14:39.130 "qid": 0,
00:14:39.130 "state": "enabled",
00:14:39.130 "thread": "nvmf_tgt_poll_group_000",
00:14:39.130 "listen_address": {
00:14:39.130 "trtype": "TCP",
00:14:39.130 "adrfam": "IPv4",
00:14:39.130 "traddr": "10.0.0.2",
00:14:39.130 "trsvcid": "4420"
00:14:39.130 },
00:14:39.130 "peer_address": {
00:14:39.130 "trtype": "TCP",
00:14:39.130 "adrfam": "IPv4",
00:14:39.130 "traddr": "10.0.0.1",
00:14:39.130 "trsvcid": "52552"
00:14:39.130 },
00:14:39.130 "auth": {
00:14:39.130 "state": "completed",
00:14:39.130 "digest": "sha256",
00:14:39.130 "dhgroup": "ffdhe6144"
00:14:39.130 }
00:14:39.130 }
00:14:39.130 ]'
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:39.130 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:39.387 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:39.387 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:39.387 13:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:39.644 13:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:40.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:40.580 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:40.837 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:40.838 13:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:41.773
00:14:41.773 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:41.773 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:41.773 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:42.030 {
00:14:42.030 "cntlid": 41,
00:14:42.030 "qid": 0,
00:14:42.030 "state": "enabled",
00:14:42.030 "thread": "nvmf_tgt_poll_group_000",
00:14:42.030 "listen_address": {
00:14:42.030 "trtype": "TCP",
00:14:42.030 "adrfam": "IPv4",
00:14:42.030 "traddr": "10.0.0.2",
00:14:42.030 "trsvcid": "4420"
00:14:42.030 },
00:14:42.030 "peer_address": {
00:14:42.030 "trtype": "TCP",
00:14:42.030 "adrfam": "IPv4",
00:14:42.030 "traddr": "10.0.0.1",
00:14:42.030 "trsvcid": "33808"
00:14:42.030 },
00:14:42.030 "auth": {
00:14:42.030 "state": "completed",
00:14:42.030 "digest": "sha256",
00:14:42.030 "dhgroup": "ffdhe8192"
00:14:42.030 }
00:14:42.030 }
00:14:42.030 ]'
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:42.030 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:42.289 13:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:43.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:43.666 13:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:43.666 13:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:44.601
00:14:44.601 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:44.601 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:44.601 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:44.859 {
00:14:44.859 "cntlid": 43,
00:14:44.859 "qid": 0,
00:14:44.859 "state": "enabled",
00:14:44.859 "thread": "nvmf_tgt_poll_group_000",
00:14:44.859 "listen_address": {
00:14:44.859 "trtype": "TCP",
00:14:44.859 "adrfam": "IPv4",
00:14:44.859 "traddr": "10.0.0.2",
00:14:44.859 "trsvcid": "4420"
00:14:44.859 },
00:14:44.859 "peer_address": {
00:14:44.859 "trtype": "TCP",
00:14:44.859 "adrfam": "IPv4",
00:14:44.859 "traddr": "10.0.0.1",
00:14:44.859 "trsvcid": "33834"
00:14:44.859 },
00:14:44.859 "auth": {
00:14:44.859 "state": "completed",
00:14:44.859 "digest": "sha256",
00:14:44.859 "dhgroup": "ffdhe8192"
00:14:44.859 }
00:14:44.859 }
00:14:44.859 ]'
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:44.859 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:45.433 13:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==:
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:46.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:46.362 13:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:46.621 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:47.585
00:14:47.585 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:47.585 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:47.585 13:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:47.585 {
00:14:47.585 "cntlid": 45,
00:14:47.585 "qid": 0,
00:14:47.585 "state": "enabled",
00:14:47.585 "thread": "nvmf_tgt_poll_group_000",
00:14:47.585 "listen_address": {
00:14:47.585 "trtype": "TCP",
00:14:47.585 "adrfam": "IPv4",
00:14:47.585 "traddr": "10.0.0.2",
00:14:47.585 "trsvcid": "4420"
00:14:47.585 },
00:14:47.585 "peer_address": {
00:14:47.585 "trtype": "TCP",
00:14:47.585 "adrfam": "IPv4",
00:14:47.585 "traddr": "10.0.0.1",
00:14:47.585 "trsvcid": "33858"
00:14:47.585 },
00:14:47.585 "auth": {
00:14:47.585 "state": "completed",
00:14:47.585 "digest": "sha256",
00:14:47.585 "dhgroup": "ffdhe8192"
00:14:47.585 }
00:14:47.585 }
00:14:47.585 ]'
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:47.585 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:47.842 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:47.842 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:47.842 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:47.842 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:47.842 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:48.100 13:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E:
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:49.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:49.033 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:49.290 13:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:50.223
00:14:50.223 13:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:50.223 13:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:50.223 13:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:50.479 {
00:14:50.479 "cntlid": 47,
00:14:50.479 "qid": 0,
00:14:50.479 "state": "enabled",
00:14:50.479 "thread": "nvmf_tgt_poll_group_000",
00:14:50.479 "listen_address": {
00:14:50.479 "trtype": "TCP",
00:14:50.479 "adrfam": "IPv4",
00:14:50.479 "traddr": "10.0.0.2",
00:14:50.479 "trsvcid": "4420"
00:14:50.479 },
00:14:50.479 "peer_address": {
00:14:50.479 "trtype": "TCP",
00:14:50.479 "adrfam": "IPv4",
00:14:50.479 "traddr": "10.0.0.1",
00:14:50.479 "trsvcid": "59844"
00:14:50.479 },
00:14:50.479 "auth": {
00:14:50.479 "state": "completed",
00:14:50.479 "digest": "sha256",
00:14:50.479 "dhgroup": "ffdhe8192"
00:14:50.479 }
00:14:50.479 }
00:14:50.479 ]'
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:50.479 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:51.046 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:51.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:51.984 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:52.243 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:52.501
00:14:52.501 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:52.501 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:52.501 13:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:52.760 {
00:14:52.760 "cntlid": 49,
00:14:52.760 "qid": 0,
00:14:52.760 "state": "enabled",
00:14:52.760 "thread": "nvmf_tgt_poll_group_000",
00:14:52.760 "listen_address": {
00:14:52.760 "trtype": "TCP",
00:14:52.760 "adrfam": "IPv4",
00:14:52.760 "traddr": "10.0.0.2",
00:14:52.760 "trsvcid": "4420"
00:14:52.760 },
00:14:52.760 "peer_address": {
00:14:52.760 "trtype": "TCP",
00:14:52.760 "adrfam": "IPv4",
00:14:52.760 "traddr": "10.0.0.1",
00:14:52.760 "trsvcid": "59870"
00:14:52.760 },
00:14:52.760 "auth": {
00:14:52.760 "state": "completed",
00:14:52.760 "digest": "sha384",
00:14:52.760 "dhgroup": "null"
00:14:52.760 }
00:14:52.760 }
00:14:52.760 ]'
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:52.760 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:53.019 13:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:
00:14:53.954 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:53.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:53.954 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:53.954 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:53.954 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.213 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:54.213 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:54.213 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:54.213 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.473 13:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.731
00:14:54.731 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:54.731 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:54.731 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:54.990 {
00:14:54.990 "cntlid": 51,
00:14:54.990 "qid": 0,
00:14:54.990 "state": "enabled",
00:14:54.990 "thread": "nvmf_tgt_poll_group_000",
00:14:54.990 "listen_address": {
00:14:54.990 "trtype": "TCP",
00:14:54.990 "adrfam": "IPv4",
00:14:54.990 "traddr": "10.0.0.2",
00:14:54.990 "trsvcid": "4420"
00:14:54.990 },
00:14:54.990 "peer_address": {
00:14:54.990 "trtype": "TCP",
00:14:54.990 "adrfam": "IPv4",
00:14:54.990 "traddr": "10.0.0.1",
00:14:54.990 "trsvcid": "59914"
00:14:54.990 },
00:14:54.990 "auth": {
00:14:54.990 "state": "completed",
00:14:54.990 "digest": "sha384",
00:14:54.990 "dhgroup": "null"
00:14:54.990 }
00:14:54.990 }
00:14:54.990 ]'
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:54.990 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:55.248 13:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==:
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:56.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:56.186 13:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.444 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.702
00:14:56.702 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:56.702 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:56.702 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:57.268 {
00:14:57.268 "cntlid": 53,
00:14:57.268 "qid": 0,
00:14:57.268 "state": "enabled",
00:14:57.268 "thread": "nvmf_tgt_poll_group_000",
00:14:57.268 "listen_address": {
00:14:57.268 "trtype": "TCP",
00:14:57.268 "adrfam": "IPv4",
00:14:57.268 "traddr": "10.0.0.2",
00:14:57.268 "trsvcid": "4420"
00:14:57.268 },
00:14:57.268 "peer_address": {
00:14:57.268 "trtype": "TCP",
00:14:57.268 "adrfam": "IPv4",
00:14:57.268 "traddr": "10.0.0.1",
00:14:57.268 "trsvcid": "59926"
00:14:57.268 },
00:14:57.268 "auth": {
00:14:57.268 "state": "completed",
00:14:57.268 "digest": "sha384",
00:14:57.268 "dhgroup": "null"
00:14:57.268 }
00:14:57.268 }
00:14:57.268 ]'
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:57.268 13:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:57.526 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E:
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:58.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:58.462 13:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:58.719 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:58.720 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:58.976
00:14:58.976 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:58.976 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:58.976 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:59.234 {
00:14:59.234 "cntlid": 55,
00:14:59.234 "qid": 0,
00:14:59.234 "state": "enabled",
00:14:59.234 "thread": "nvmf_tgt_poll_group_000",
00:14:59.234 "listen_address": {
00:14:59.234 "trtype": "TCP",
00:14:59.234 "adrfam": "IPv4",
00:14:59.234 "traddr": "10.0.0.2",
00:14:59.234 "trsvcid": "4420"
00:14:59.234 },
00:14:59.234 "peer_address": {
00:14:59.234 "trtype": "TCP",
00:14:59.234 "adrfam": "IPv4",
00:14:59.234 "traddr": "10.0.0.1",
00:14:59.234 "trsvcid": "59962"
00:14:59.234 },
00:14:59.234 "auth": {
00:14:59.234 "state": "completed",
00:14:59.234 "digest": "sha384",
00:14:59.234 "dhgroup": "null"
00:14:59.234 }
00:14:59.234 }
00:14:59.234 ]'
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:59.234 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:59.490 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:14:59.490 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:59.491 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:59.491 13:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:59.491 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:00.683 13:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:00.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:00.683 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.941 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:01.199
00:15:01.199 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:01.199 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:01.199 13:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:01.455 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:01.455 {
00:15:01.455 "cntlid": 57,
00:15:01.456 "qid": 0,
00:15:01.456 "state": "enabled",
00:15:01.456 "thread": "nvmf_tgt_poll_group_000",
00:15:01.456 "listen_address": {
00:15:01.456 "trtype": "TCP",
00:15:01.456 "adrfam": "IPv4",
00:15:01.456 "traddr": "10.0.0.2",
00:15:01.456 "trsvcid": "4420"
00:15:01.456 },
00:15:01.456 "peer_address": {
00:15:01.456 "trtype": "TCP",
00:15:01.456 "adrfam": "IPv4",
00:15:01.456 "traddr": "10.0.0.1",
00:15:01.456 "trsvcid": "54682"
00:15:01.456 },
00:15:01.456 "auth": {
00:15:01.456 "state": "completed",
00:15:01.456 "digest": "sha384",
00:15:01.456 "dhgroup": "ffdhe2048"
00:15:01.456 }
00:15:01.456 }
00:15:01.456 ]'
00:15:01.456 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:01.456 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:01.456 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:01.456 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:01.456 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:01.712 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:01.712 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:01.712 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:01.969 13:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:02.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:03.421
00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:03.679 {
00:15:03.679 "cntlid": 59,
00:15:03.679 "qid": 0,
00:15:03.679 "state": "enabled",
00:15:03.679 "thread": "nvmf_tgt_poll_group_000",
00:15:03.679 "listen_address": {
00:15:03.679 "trtype": "TCP",
00:15:03.679 "adrfam": "IPv4",
00:15:03.679 "traddr": "10.0.0.2",
00:15:03.679 "trsvcid": "4420"
00:15:03.679 },
00:15:03.679 "peer_address": {
00:15:03.679 "trtype": "TCP",
00:15:03.679 "adrfam": "IPv4",
00:15:03.679 "traddr": "10.0.0.1",
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.904 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.163 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.421 00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.421 13:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.679 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.679 { 00:15:03.679 "cntlid": 59, 00:15:03.679 "qid": 0, 00:15:03.679 "state": "enabled", 00:15:03.679 "thread": "nvmf_tgt_poll_group_000", 00:15:03.679 "listen_address": { 00:15:03.679 "trtype": "TCP", 00:15:03.679 "adrfam": "IPv4", 00:15:03.679 "traddr": "10.0.0.2", 00:15:03.679 "trsvcid": "4420" 00:15:03.679 }, 00:15:03.679 "peer_address": { 00:15:03.679 "trtype": "TCP", 00:15:03.679 "adrfam": "IPv4", 00:15:03.679 
"traddr": "10.0.0.1", 00:15:03.679 "trsvcid": "54702" 00:15:03.679 }, 00:15:03.679 "auth": { 00:15:03.679 "state": "completed", 00:15:03.680 "digest": "sha384", 00:15:03.680 "dhgroup": "ffdhe2048" 00:15:03.680 } 00:15:03.680 } 00:15:03.680 ]' 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.680 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.940 13:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.317 13:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.575 00:15:05.575 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.575 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.575 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.833 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.833 { 00:15:05.833 "cntlid": 61, 00:15:05.833 "qid": 0, 00:15:05.833 "state": "enabled", 00:15:05.833 "thread": "nvmf_tgt_poll_group_000", 00:15:05.833 "listen_address": { 00:15:05.833 "trtype": "TCP", 00:15:05.833 "adrfam": "IPv4", 00:15:05.834 "traddr": "10.0.0.2", 00:15:05.834 "trsvcid": "4420" 00:15:05.834 }, 00:15:05.834 "peer_address": { 00:15:05.834 "trtype": "TCP", 00:15:05.834 "adrfam": "IPv4", 00:15:05.834 "traddr": "10.0.0.1", 00:15:05.834 "trsvcid": "54732" 00:15:05.834 }, 00:15:05.834 "auth": { 00:15:05.834 "state": "completed", 00:15:05.834 "digest": "sha384", 00:15:05.834 "dhgroup": "ffdhe2048" 00:15:05.834 } 00:15:05.834 } 00:15:05.834 ]' 00:15:05.834 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.834 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.834 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.091 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.091 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.091 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.091 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.091 13:07:27 nvmf_tcp.nvmf_auth_target -- 
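The host-side half of the handshake above is the bdev_nvme_attach_controller call: the same key pair is handed to the SPDK initiator when the controller is created, so authentication happens during attach rather than as a separate step. Reformatted for readability, with every value copied from the transcript:

# Attach a controller from the host-side SPDK instance, authenticating with
# key2 and expecting the controller to answer with ckey2.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2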
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.348 13:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.281 13:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.282 13:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.282 13:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.539 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.796 00:15:07.796 13:07:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.796 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.796 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.052 { 00:15:08.052 "cntlid": 63, 00:15:08.052 "qid": 0, 00:15:08.052 "state": "enabled", 00:15:08.052 "thread": "nvmf_tgt_poll_group_000", 00:15:08.052 "listen_address": { 00:15:08.052 "trtype": "TCP", 00:15:08.052 "adrfam": "IPv4", 00:15:08.052 "traddr": "10.0.0.2", 00:15:08.052 "trsvcid": "4420" 00:15:08.052 }, 00:15:08.052 "peer_address": { 00:15:08.052 "trtype": "TCP", 00:15:08.052 "adrfam": "IPv4", 00:15:08.052 "traddr": "10.0.0.1", 00:15:08.052 "trsvcid": "54758" 00:15:08.052 }, 00:15:08.052 "auth": { 00:15:08.052 "state": "completed", 00:15:08.052 "digest": "sha384", 00:15:08.052 "dhgroup": "ffdhe2048" 00:15:08.052 } 00:15:08.052 } 00:15:08.052 ]' 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.052 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.310 13:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:09.241 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
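This closes the sha384/ffdhe2048 sweep; the next entries advance the outer loop to ffdhe3072. The target/auth.sh@92-96 markers imply a nested sweep roughly like the sketch below. The dhgroups/keys arrays and the connect_authenticate helper are the script's own; their exact contents are an assumption here (this excerpt only ever shows the sha384 digest):

# Reconstructed shape of the sweep: one connect/verify/disconnect cycle per
# (DH group, key index) combination.
for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done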
00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.499 13:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.756 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.013 00:15:10.014 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.014 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.014 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.271 { 
00:15:10.271 "cntlid": 65, 00:15:10.271 "qid": 0, 00:15:10.271 "state": "enabled", 00:15:10.271 "thread": "nvmf_tgt_poll_group_000", 00:15:10.271 "listen_address": { 00:15:10.271 "trtype": "TCP", 00:15:10.271 "adrfam": "IPv4", 00:15:10.271 "traddr": "10.0.0.2", 00:15:10.271 "trsvcid": "4420" 00:15:10.271 }, 00:15:10.271 "peer_address": { 00:15:10.271 "trtype": "TCP", 00:15:10.271 "adrfam": "IPv4", 00:15:10.271 "traddr": "10.0.0.1", 00:15:10.271 "trsvcid": "60396" 00:15:10.271 }, 00:15:10.271 "auth": { 00:15:10.271 "state": "completed", 00:15:10.271 "digest": "sha384", 00:15:10.271 "dhgroup": "ffdhe3072" 00:15:10.271 } 00:15:10.271 } 00:15:10.271 ]' 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.271 13:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.531 13:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.471 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.728 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.294 00:15:12.294 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.294 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.294 13:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.552 { 00:15:12.552 "cntlid": 67, 00:15:12.552 "qid": 0, 00:15:12.552 "state": "enabled", 00:15:12.552 "thread": "nvmf_tgt_poll_group_000", 00:15:12.552 "listen_address": { 00:15:12.552 "trtype": "TCP", 00:15:12.552 "adrfam": "IPv4", 00:15:12.552 "traddr": "10.0.0.2", 00:15:12.552 "trsvcid": "4420" 00:15:12.552 }, 00:15:12.552 "peer_address": { 00:15:12.552 "trtype": "TCP", 00:15:12.552 "adrfam": "IPv4", 00:15:12.552 "traddr": "10.0.0.1", 00:15:12.552 "trsvcid": "60406" 00:15:12.552 }, 00:15:12.552 "auth": { 00:15:12.552 "state": "completed", 00:15:12.552 "digest": "sha384", 00:15:12.552 "dhgroup": "ffdhe3072" 00:15:12.552 } 00:15:12.552 } 00:15:12.552 ]' 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.552 13:07:34 
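A reading aid for the paired, near-identical lines throughout this section: rpc_cmd talks to the target instance on its default RPC socket, while hostrpc points rpc.py at the second, host-side SPDK instance. Each target/auth.sh@31 line is that wrapper expanding; its body below is our reconstruction, with $rootdir standing in for the workspace checkout path seen in the log:

# hostrpc: forward an RPC to the host-side SPDK app instead of the target.
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers   # e.g. the controller-name checks above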
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.552 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.809 13:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.748 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.006 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.264 00:15:14.264 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.264 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.264 13:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.523 { 00:15:14.523 "cntlid": 69, 00:15:14.523 "qid": 0, 00:15:14.523 "state": "enabled", 00:15:14.523 "thread": "nvmf_tgt_poll_group_000", 00:15:14.523 "listen_address": { 00:15:14.523 "trtype": "TCP", 00:15:14.523 "adrfam": "IPv4", 00:15:14.523 "traddr": "10.0.0.2", 00:15:14.523 "trsvcid": "4420" 00:15:14.523 }, 00:15:14.523 "peer_address": { 00:15:14.523 "trtype": "TCP", 00:15:14.523 "adrfam": "IPv4", 00:15:14.523 "traddr": "10.0.0.1", 00:15:14.523 "trsvcid": "60432" 00:15:14.523 }, 00:15:14.523 "auth": { 00:15:14.523 "state": "completed", 00:15:14.523 "digest": "sha384", 00:15:14.523 "dhgroup": "ffdhe3072" 00:15:14.523 } 00:15:14.523 } 00:15:14.523 ]' 00:15:14.523 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.781 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.039 13:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret 
DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.975 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.232 13:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.490 00:15:16.490 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.490 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.490 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.057 { 00:15:17.057 "cntlid": 71, 00:15:17.057 "qid": 0, 00:15:17.057 "state": "enabled", 00:15:17.057 "thread": "nvmf_tgt_poll_group_000", 00:15:17.057 "listen_address": { 00:15:17.057 "trtype": "TCP", 00:15:17.057 "adrfam": "IPv4", 00:15:17.057 "traddr": "10.0.0.2", 00:15:17.057 "trsvcid": "4420" 00:15:17.057 }, 00:15:17.057 "peer_address": { 00:15:17.057 "trtype": "TCP", 00:15:17.057 "adrfam": "IPv4", 00:15:17.057 "traddr": "10.0.0.1", 00:15:17.057 "trsvcid": "60456" 00:15:17.057 }, 00:15:17.057 "auth": { 00:15:17.057 "state": "completed", 00:15:17.057 "digest": "sha384", 00:15:17.057 "dhgroup": "ffdhe3072" 00:15:17.057 } 00:15:17.057 } 00:15:17.057 ]' 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.057 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.313 13:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.248 13:07:39 nvmf_tcp.nvmf_auth_target -- 
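Alongside the SPDK initiator, each cycle also drives the kernel initiator through nvme-cli, passing the secret inline in the standard DHHC-1:<xx>:<base64>: representation (the two-digit field identifies the hash the secret was transformed with). The connect/disconnect pair from the cycle above, reformatted with every value copied from the transcript (-i 1 requests a single I/O queue):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0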
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.505 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.072 00:15:19.072 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.072 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.072 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.331 { 00:15:19.331 "cntlid": 73, 00:15:19.331 "qid": 0, 00:15:19.331 "state": "enabled", 00:15:19.331 "thread": "nvmf_tgt_poll_group_000", 00:15:19.331 "listen_address": { 00:15:19.331 "trtype": "TCP", 00:15:19.331 "adrfam": "IPv4", 00:15:19.331 "traddr": "10.0.0.2", 00:15:19.331 "trsvcid": "4420" 00:15:19.331 }, 00:15:19.331 "peer_address": { 00:15:19.331 "trtype": "TCP", 00:15:19.331 "adrfam": "IPv4", 00:15:19.331 "traddr": "10.0.0.1", 00:15:19.331 "trsvcid": "60466" 00:15:19.331 }, 00:15:19.331 "auth": { 00:15:19.331 
"state": "completed", 00:15:19.331 "digest": "sha384", 00:15:19.331 "dhgroup": "ffdhe4096" 00:15:19.331 } 00:15:19.331 } 00:15:19.331 ]' 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.331 13:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.589 13:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.524 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.782 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.350 00:15:21.350 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.350 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.350 13:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.609 { 00:15:21.609 "cntlid": 75, 00:15:21.609 "qid": 0, 00:15:21.609 "state": "enabled", 00:15:21.609 "thread": "nvmf_tgt_poll_group_000", 00:15:21.609 "listen_address": { 00:15:21.609 "trtype": "TCP", 00:15:21.609 "adrfam": "IPv4", 00:15:21.609 "traddr": "10.0.0.2", 00:15:21.609 "trsvcid": "4420" 00:15:21.609 }, 00:15:21.609 "peer_address": { 00:15:21.609 "trtype": "TCP", 00:15:21.609 "adrfam": "IPv4", 00:15:21.609 "traddr": "10.0.0.1", 00:15:21.609 "trsvcid": "48704" 00:15:21.609 }, 00:15:21.609 "auth": { 00:15:21.609 "state": "completed", 00:15:21.609 "digest": "sha384", 00:15:21.609 "dhgroup": "ffdhe4096" 00:15:21.609 } 00:15:21.609 } 00:15:21.609 ]' 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.609 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.867 13:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.800 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.058 13:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:23.626 00:15:23.626 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.626 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.626 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.886 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.886 { 00:15:23.886 "cntlid": 77, 00:15:23.886 "qid": 0, 00:15:23.886 "state": "enabled", 00:15:23.886 "thread": "nvmf_tgt_poll_group_000", 00:15:23.886 "listen_address": { 00:15:23.886 "trtype": "TCP", 00:15:23.886 "adrfam": "IPv4", 00:15:23.886 "traddr": "10.0.0.2", 00:15:23.886 "trsvcid": "4420" 00:15:23.886 }, 00:15:23.886 "peer_address": { 00:15:23.887 "trtype": "TCP", 00:15:23.887 "adrfam": "IPv4", 00:15:23.887 "traddr": "10.0.0.1", 00:15:23.887 "trsvcid": "48746" 00:15:23.887 }, 00:15:23.887 "auth": { 00:15:23.887 "state": "completed", 00:15:23.887 "digest": "sha384", 00:15:23.887 "dhgroup": "ffdhe4096" 00:15:23.887 } 00:15:23.887 } 00:15:23.887 ]' 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.887 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.159 13:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.097 13:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.355 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.922 00:15:25.923 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.923 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.923 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.180 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.180 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.180 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.181 { 00:15:26.181 "cntlid": 79, 00:15:26.181 "qid": 
0, 00:15:26.181 "state": "enabled", 00:15:26.181 "thread": "nvmf_tgt_poll_group_000", 00:15:26.181 "listen_address": { 00:15:26.181 "trtype": "TCP", 00:15:26.181 "adrfam": "IPv4", 00:15:26.181 "traddr": "10.0.0.2", 00:15:26.181 "trsvcid": "4420" 00:15:26.181 }, 00:15:26.181 "peer_address": { 00:15:26.181 "trtype": "TCP", 00:15:26.181 "adrfam": "IPv4", 00:15:26.181 "traddr": "10.0.0.1", 00:15:26.181 "trsvcid": "48770" 00:15:26.181 }, 00:15:26.181 "auth": { 00:15:26.181 "state": "completed", 00:15:26.181 "digest": "sha384", 00:15:26.181 "dhgroup": "ffdhe4096" 00:15:26.181 } 00:15:26.181 } 00:15:26.181 ]' 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.181 13:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.440 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.379 13:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.635 13:07:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.635 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.201 00:15:28.201 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.201 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.201 13:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.458 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.458 { 00:15:28.458 "cntlid": 81, 00:15:28.458 "qid": 0, 00:15:28.458 "state": "enabled", 00:15:28.458 "thread": "nvmf_tgt_poll_group_000", 00:15:28.458 "listen_address": { 00:15:28.458 "trtype": "TCP", 00:15:28.458 "adrfam": "IPv4", 00:15:28.458 "traddr": "10.0.0.2", 00:15:28.458 "trsvcid": "4420" 00:15:28.458 }, 00:15:28.458 "peer_address": { 00:15:28.458 "trtype": "TCP", 00:15:28.458 "adrfam": "IPv4", 00:15:28.458 "traddr": "10.0.0.1", 00:15:28.458 "trsvcid": "48794" 00:15:28.458 }, 00:15:28.458 "auth": { 00:15:28.459 "state": "completed", 00:15:28.459 "digest": "sha384", 00:15:28.459 "dhgroup": "ffdhe6144" 00:15:28.459 } 00:15:28.459 } 00:15:28.459 ]' 00:15:28.459 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.716 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.972 13:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.906 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.163 13:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.730 00:15:30.730 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.730 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.730 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.988 { 00:15:30.988 "cntlid": 83, 00:15:30.988 "qid": 0, 00:15:30.988 "state": "enabled", 00:15:30.988 "thread": "nvmf_tgt_poll_group_000", 00:15:30.988 "listen_address": { 00:15:30.988 "trtype": "TCP", 00:15:30.988 "adrfam": "IPv4", 00:15:30.988 "traddr": "10.0.0.2", 00:15:30.988 "trsvcid": "4420" 00:15:30.988 }, 00:15:30.988 "peer_address": { 00:15:30.988 "trtype": "TCP", 00:15:30.988 "adrfam": "IPv4", 00:15:30.988 "traddr": "10.0.0.1", 00:15:30.988 "trsvcid": "46124" 00:15:30.988 }, 00:15:30.988 "auth": { 00:15:30.988 "state": "completed", 00:15:30.988 "digest": "sha384", 00:15:30.988 "dhgroup": "ffdhe6144" 00:15:30.988 } 00:15:30.988 } 00:15:30.988 ]' 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.988 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.246 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.246 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.246 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.246 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.246 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.506 13:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret 
DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.439 13:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.696 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.270 00:15:33.270 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.270 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.270 13:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.528 { 00:15:33.528 "cntlid": 85, 00:15:33.528 "qid": 0, 00:15:33.528 "state": "enabled", 00:15:33.528 "thread": "nvmf_tgt_poll_group_000", 00:15:33.528 "listen_address": { 00:15:33.528 "trtype": "TCP", 00:15:33.528 "adrfam": "IPv4", 00:15:33.528 "traddr": "10.0.0.2", 00:15:33.528 "trsvcid": "4420" 00:15:33.528 }, 00:15:33.528 "peer_address": { 00:15:33.528 "trtype": "TCP", 00:15:33.528 "adrfam": "IPv4", 00:15:33.528 "traddr": "10.0.0.1", 00:15:33.528 "trsvcid": "46138" 00:15:33.528 }, 00:15:33.528 "auth": { 00:15:33.528 "state": "completed", 00:15:33.528 "digest": "sha384", 00:15:33.528 "dhgroup": "ffdhe6144" 00:15:33.528 } 00:15:33.528 } 00:15:33.528 ]' 00:15:33.528 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.529 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.791 13:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
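The trace above is one pass of a fixed pattern that target/auth.sh repeats for every digest/dhgroup/key combination: pin the host-side DH-HMAC-CHAP options, authorize the host NQN on the subsystem with the matching key pair, attach a controller (authentication runs during the fabric CONNECT), check the resulting qpair, then tear everything down. Below is a minimal sketch of that loop, reconstructed only from the RPCs echoed in this log; the rpc.py path, sockets, address, and NQNs are copied from the trace, while the loop bounds and the key names key0..key3/ckey0..ckey2 are assumptions about setup done earlier in the test (the log shows key3 being used without a controller key).

  #!/usr/bin/env bash
  # Sketch only: loop shape inferred from this trace, not the verbatim test script.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock                       # host-side RPC socket seen in the trace
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  for digest in sha384 sha512; do                   # digests exercised in this slice of the log
    for dhgroup in null ffdhe4096 ffdhe6144 ffdhe8192; do   # groups visible in this slice
      for keyid in 0 1 2 3; do
        ckey=()                                     # key3 carries no controller key in this run
        [[ $keyid -lt 3 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
        # Host side: restrict the initiator to one digest/dhgroup combination.
        "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Target side: authorize the host NQN with the matching key pair.
        "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
          --dhchap-key "key$keyid" "${ckey[@]}"
        # Attach a controller; DH-HMAC-CHAP runs as part of CONNECT.
        "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
          --dhchap-key "key$keyid" "${ckey[@]}"
        # ...verify the authenticated qpair (see the checks sketched further on),
        # detach, re-check the same secrets via the kernel initiator
        # (nvme connect / nvme disconnect), then de-authorize the host again.
        "$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
        "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
      done
    done
  done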
00:15:34.729 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.987 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.988 13:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.556 00:15:35.557 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.557 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.557 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.814 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.814 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.814 13:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.814 13:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.073 { 00:15:36.073 "cntlid": 87, 00:15:36.073 "qid": 0, 00:15:36.073 "state": "enabled", 00:15:36.073 "thread": "nvmf_tgt_poll_group_000", 00:15:36.073 "listen_address": { 00:15:36.073 "trtype": "TCP", 00:15:36.073 "adrfam": "IPv4", 00:15:36.073 "traddr": "10.0.0.2", 00:15:36.073 "trsvcid": "4420" 00:15:36.073 }, 00:15:36.073 "peer_address": { 00:15:36.073 "trtype": "TCP", 00:15:36.073 "adrfam": "IPv4", 00:15:36.073 "traddr": "10.0.0.1", 00:15:36.073 "trsvcid": "46164" 00:15:36.073 }, 00:15:36.073 "auth": { 00:15:36.073 "state": "completed", 
00:15:36.073 "digest": "sha384", 00:15:36.073 "dhgroup": "ffdhe6144" 00:15:36.073 } 00:15:36.073 } 00:15:36.073 ]' 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.073 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.331 13:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.270 13:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.528 13:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.463 00:15:38.463 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.463 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.463 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.721 { 00:15:38.721 "cntlid": 89, 00:15:38.721 "qid": 0, 00:15:38.721 "state": "enabled", 00:15:38.721 "thread": "nvmf_tgt_poll_group_000", 00:15:38.721 "listen_address": { 00:15:38.721 "trtype": "TCP", 00:15:38.721 "adrfam": "IPv4", 00:15:38.721 "traddr": "10.0.0.2", 00:15:38.721 "trsvcid": "4420" 00:15:38.721 }, 00:15:38.721 "peer_address": { 00:15:38.721 "trtype": "TCP", 00:15:38.721 "adrfam": "IPv4", 00:15:38.721 "traddr": "10.0.0.1", 00:15:38.721 "trsvcid": "46196" 00:15:38.721 }, 00:15:38.721 "auth": { 00:15:38.721 "state": "completed", 00:15:38.721 "digest": "sha384", 00:15:38.721 "dhgroup": "ffdhe8192" 00:15:38.721 } 00:15:38.721 } 00:15:38.721 ]' 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.721 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.288 13:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.226 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.485 13:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.486 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.486 13:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
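Every attach in this trace is followed by the same four checks before the controller is torn down: the host-side RPC must list the controller under its expected name, and the target's qpair listing (qid 0, the admin queue) must report the negotiated digest, DH group, and a completed auth state. Condensed into a sketch, using the exact RPCs and jq filters from the trace; the expected values vary per iteration, and sha384/ffdhe8192 is shown to match the iteration running at this point in the log:

  # Sketch of the post-attach verification this trace repeats each iteration.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: the controller created by bdev_nvme_attach_controller must exist.
  [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: the qpair must show a completed DH-HMAC-CHAP negotiation
  # with the digest and DH group this iteration configured.
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

After these checks pass, the controller is detached and the same key material is replayed through the kernel initiator (nvme connect with --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect -n nqn.2024-03.io.spdk:cnode0), so each digest/dhgroup/key combination is authenticated by both the SPDK and kernel host paths.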
00:15:41.422 00:15:41.422 13:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.422 13:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.422 13:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.422 { 00:15:41.422 "cntlid": 91, 00:15:41.422 "qid": 0, 00:15:41.422 "state": "enabled", 00:15:41.422 "thread": "nvmf_tgt_poll_group_000", 00:15:41.422 "listen_address": { 00:15:41.422 "trtype": "TCP", 00:15:41.422 "adrfam": "IPv4", 00:15:41.422 "traddr": "10.0.0.2", 00:15:41.422 "trsvcid": "4420" 00:15:41.422 }, 00:15:41.422 "peer_address": { 00:15:41.422 "trtype": "TCP", 00:15:41.422 "adrfam": "IPv4", 00:15:41.422 "traddr": "10.0.0.1", 00:15:41.422 "trsvcid": "59694" 00:15:41.422 }, 00:15:41.422 "auth": { 00:15:41.422 "state": "completed", 00:15:41.422 "digest": "sha384", 00:15:41.422 "dhgroup": "ffdhe8192" 00:15:41.422 } 00:15:41.422 } 00:15:41.422 ]' 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.422 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.681 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.681 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.681 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.681 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.681 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.939 13:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.876 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.134 13:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.073 00:15:44.073 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.073 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.073 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.331 { 
00:15:44.331 "cntlid": 93, 00:15:44.331 "qid": 0, 00:15:44.331 "state": "enabled", 00:15:44.331 "thread": "nvmf_tgt_poll_group_000", 00:15:44.331 "listen_address": { 00:15:44.331 "trtype": "TCP", 00:15:44.331 "adrfam": "IPv4", 00:15:44.331 "traddr": "10.0.0.2", 00:15:44.331 "trsvcid": "4420" 00:15:44.331 }, 00:15:44.331 "peer_address": { 00:15:44.331 "trtype": "TCP", 00:15:44.331 "adrfam": "IPv4", 00:15:44.331 "traddr": "10.0.0.1", 00:15:44.331 "trsvcid": "59710" 00:15:44.331 }, 00:15:44.331 "auth": { 00:15:44.331 "state": "completed", 00:15:44.331 "digest": "sha384", 00:15:44.331 "dhgroup": "ffdhe8192" 00:15:44.331 } 00:15:44.331 } 00:15:44.331 ]' 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.331 13:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.590 13:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:45.528 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.528 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.528 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.528 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.787 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.787 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.787 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.787 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.045 13:08:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.045 13:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.984 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.984 { 00:15:46.984 "cntlid": 95, 00:15:46.984 "qid": 0, 00:15:46.984 "state": "enabled", 00:15:46.984 "thread": "nvmf_tgt_poll_group_000", 00:15:46.984 "listen_address": { 00:15:46.984 "trtype": "TCP", 00:15:46.984 "adrfam": "IPv4", 00:15:46.984 "traddr": "10.0.0.2", 00:15:46.984 "trsvcid": "4420" 00:15:46.984 }, 00:15:46.984 "peer_address": { 00:15:46.984 "trtype": "TCP", 00:15:46.984 "adrfam": "IPv4", 00:15:46.984 "traddr": "10.0.0.1", 00:15:46.984 "trsvcid": "59736" 00:15:46.984 }, 00:15:46.984 "auth": { 00:15:46.984 "state": "completed", 00:15:46.984 "digest": "sha384", 00:15:46.984 "dhgroup": "ffdhe8192" 00:15:46.984 } 00:15:46.984 } 00:15:46.984 ]' 00:15:46.984 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.241 13:08:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.241 13:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.499 13:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:48.437 13:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.437 13:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.437 13:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.437 13:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.437 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.695 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.954 00:15:48.954 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.954 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.954 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.212 { 00:15:49.212 "cntlid": 97, 00:15:49.212 "qid": 0, 00:15:49.212 "state": "enabled", 00:15:49.212 "thread": "nvmf_tgt_poll_group_000", 00:15:49.212 "listen_address": { 00:15:49.212 "trtype": "TCP", 00:15:49.212 "adrfam": "IPv4", 00:15:49.212 "traddr": "10.0.0.2", 00:15:49.212 "trsvcid": "4420" 00:15:49.212 }, 00:15:49.212 "peer_address": { 00:15:49.212 "trtype": "TCP", 00:15:49.212 "adrfam": "IPv4", 00:15:49.212 "traddr": "10.0.0.1", 00:15:49.212 "trsvcid": "59756" 00:15:49.212 }, 00:15:49.212 "auth": { 00:15:49.212 "state": "completed", 00:15:49.212 "digest": "sha512", 00:15:49.212 "dhgroup": "null" 00:15:49.212 } 00:15:49.212 } 00:15:49.212 ]' 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.212 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.472 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.472 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.472 13:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.733 13:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret 
DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.670 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.929 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.188 00:15:51.188 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.188 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.188 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:51.446 {
00:15:51.446 "cntlid": 99,
00:15:51.446 "qid": 0,
00:15:51.446 "state": "enabled",
00:15:51.446 "thread": "nvmf_tgt_poll_group_000",
00:15:51.446 "listen_address": {
00:15:51.446 "trtype": "TCP",
00:15:51.446 "adrfam": "IPv4",
00:15:51.446 "traddr": "10.0.0.2",
00:15:51.446 "trsvcid": "4420"
00:15:51.446 },
00:15:51.446 "peer_address": {
00:15:51.446 "trtype": "TCP",
00:15:51.446 "adrfam": "IPv4",
00:15:51.446 "traddr": "10.0.0.1",
00:15:51.446 "trsvcid": "58306"
00:15:51.446 },
00:15:51.446 "auth": {
00:15:51.446 "state": "completed",
00:15:51.446 "digest": "sha512",
00:15:51.446 "dhgroup": "null"
00:15:51.446 }
00:15:51.446 }
00:15:51.446 ]'
00:15:51.446 13:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:51.446 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:51.707 13:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==:
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:52.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
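
The check that closes every iteration above is worth reading as one unit: after bdev_nvme_attach_controller returns, the host-side controller list must contain nvme0, and the target-side qpair for cnode0 must report exactly the digest and dhgroup that bdev_nvme_set_options forced, with auth.state == completed. A minimal standalone replay of that check, assuming the same sockets as in this run and that rpc_cmd is the target-socket wrapper defined in autotest_common.sh (not shown in this excerpt):

hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
# the attached controller must come back under its bdev name
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# the target's qpair must have negotiated what the host was restricted to
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

The DHHC-1 strings handed to nvme connect in these same iterations are the host and controller secrets in their interchange form; per the NVMe DH-HMAC-CHAP shared-secret format, the two-digit field after DHHC-1 names the hash the secret is tied to (00 = unhashed, 01/02/03 = SHA-256/384/512).
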
00:15:52.641 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:52.898 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.155 13:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:53.155 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:53.155 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:53.412
00:15:53.412 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:53.412 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:53.412 13:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:53.667 {
00:15:53.667 "cntlid": 101,
00:15:53.667 "qid": 0,
00:15:53.667 "state": "enabled",
00:15:53.667 "thread": "nvmf_tgt_poll_group_000",
00:15:53.667 "listen_address": {
00:15:53.667 "trtype": "TCP",
00:15:53.667 "adrfam": "IPv4",
00:15:53.667 "traddr": "10.0.0.2",
00:15:53.667 "trsvcid": "4420"
00:15:53.667 },
00:15:53.667 "peer_address": {
00:15:53.667 "trtype": "TCP",
00:15:53.667 "adrfam": "IPv4",
00:15:53.667 "traddr": "10.0.0.1",
00:15:53.667 "trsvcid": "58334"
00:15:53.667 },
00:15:53.667 "auth":
{ 00:15:53.667 "state": "completed", 00:15:53.667 "digest": "sha512", 00:15:53.667 "dhgroup": "null" 00:15:53.667 } 00:15:53.667 } 00:15:53.667 ]' 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.667 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.925 13:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.891 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.147 13:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.715 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.715 13:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.972 13:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.972 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.972 { 00:15:55.972 "cntlid": 103, 00:15:55.972 "qid": 0, 00:15:55.972 "state": "enabled", 00:15:55.972 "thread": "nvmf_tgt_poll_group_000", 00:15:55.972 "listen_address": { 00:15:55.972 "trtype": "TCP", 00:15:55.972 "adrfam": "IPv4", 00:15:55.972 "traddr": "10.0.0.2", 00:15:55.972 "trsvcid": "4420" 00:15:55.972 }, 00:15:55.972 "peer_address": { 00:15:55.972 "trtype": "TCP", 00:15:55.973 "adrfam": "IPv4", 00:15:55.973 "traddr": "10.0.0.1", 00:15:55.973 "trsvcid": "58368" 00:15:55.973 }, 00:15:55.973 "auth": { 00:15:55.973 "state": "completed", 00:15:55.973 "digest": "sha512", 00:15:55.973 "dhgroup": "null" 00:15:55.973 } 00:15:55.973 } 00:15:55.973 ]' 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.973 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.230 13:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.161 13:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.419 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.986 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.986 13:08:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.986 { 00:15:57.986 "cntlid": 105, 00:15:57.986 "qid": 0, 00:15:57.986 "state": "enabled", 00:15:57.986 "thread": "nvmf_tgt_poll_group_000", 00:15:57.986 "listen_address": { 00:15:57.986 "trtype": "TCP", 00:15:57.986 "adrfam": "IPv4", 00:15:57.986 "traddr": "10.0.0.2", 00:15:57.986 "trsvcid": "4420" 00:15:57.986 }, 00:15:57.986 "peer_address": { 00:15:57.986 "trtype": "TCP", 00:15:57.986 "adrfam": "IPv4", 00:15:57.986 "traddr": "10.0.0.1", 00:15:57.986 "trsvcid": "58394" 00:15:57.986 }, 00:15:57.986 "auth": { 00:15:57.986 "state": "completed", 00:15:57.986 "digest": "sha512", 00:15:57.986 "dhgroup": "ffdhe2048" 00:15:57.986 } 00:15:57.986 } 00:15:57.986 ]' 00:15:57.986 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.243 13:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.501 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
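
From here on the transcript is the same cycle stamped out for every parameter combination; only the dhgroup, the keyN/ckeyN pair, and the resulting cntlid and peer port change. Reconstructed from the target/auth.sh@91-96 xtrace lines above, the driver is essentially the following nested loop (a sketch: the digests, dhgroups and keys arrays are populated earlier in auth.sh and are not visible in this excerpt):

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # pin the host to a single digest/dhgroup so the handshake can only
            # succeed with this exact combination, then run one connect cycle
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

This is why the @94 bdev_nvme_set_options call is reissued before every single connect_authenticate below.
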
00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.435 13:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.694 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.952 00:15:59.952 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.952 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.952 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.209 { 00:16:00.209 "cntlid": 107, 00:16:00.209 "qid": 0, 00:16:00.209 "state": "enabled", 00:16:00.209 "thread": 
"nvmf_tgt_poll_group_000", 00:16:00.209 "listen_address": { 00:16:00.209 "trtype": "TCP", 00:16:00.209 "adrfam": "IPv4", 00:16:00.209 "traddr": "10.0.0.2", 00:16:00.209 "trsvcid": "4420" 00:16:00.209 }, 00:16:00.209 "peer_address": { 00:16:00.209 "trtype": "TCP", 00:16:00.209 "adrfam": "IPv4", 00:16:00.209 "traddr": "10.0.0.1", 00:16:00.209 "trsvcid": "56120" 00:16:00.209 }, 00:16:00.209 "auth": { 00:16:00.209 "state": "completed", 00:16:00.209 "digest": "sha512", 00:16:00.209 "dhgroup": "ffdhe2048" 00:16:00.209 } 00:16:00.209 } 00:16:00.209 ]' 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.209 13:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.468 13:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.427 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.685 13:08:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.685 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.258 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.258 { 00:16:02.258 "cntlid": 109, 00:16:02.258 "qid": 0, 00:16:02.258 "state": "enabled", 00:16:02.258 "thread": "nvmf_tgt_poll_group_000", 00:16:02.258 "listen_address": { 00:16:02.258 "trtype": "TCP", 00:16:02.258 "adrfam": "IPv4", 00:16:02.258 "traddr": "10.0.0.2", 00:16:02.258 "trsvcid": "4420" 00:16:02.258 }, 00:16:02.258 "peer_address": { 00:16:02.258 "trtype": "TCP", 00:16:02.258 "adrfam": "IPv4", 00:16:02.258 "traddr": "10.0.0.1", 00:16:02.258 "trsvcid": "56154" 00:16:02.258 }, 00:16:02.258 "auth": { 00:16:02.258 "state": "completed", 00:16:02.258 "digest": "sha512", 00:16:02.258 "dhgroup": "ffdhe2048" 00:16:02.258 } 00:16:02.258 } 00:16:02.258 ]' 00:16:02.258 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.533 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.533 13:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.533 13:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.533 13:08:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.533 13:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.533 13:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.533 13:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.790 13:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.727 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.986 13:08:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.244 00:16:04.244 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.244 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.244 13:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.502 { 00:16:04.502 "cntlid": 111, 00:16:04.502 "qid": 0, 00:16:04.502 "state": "enabled", 00:16:04.502 "thread": "nvmf_tgt_poll_group_000", 00:16:04.502 "listen_address": { 00:16:04.502 "trtype": "TCP", 00:16:04.502 "adrfam": "IPv4", 00:16:04.502 "traddr": "10.0.0.2", 00:16:04.502 "trsvcid": "4420" 00:16:04.502 }, 00:16:04.502 "peer_address": { 00:16:04.502 "trtype": "TCP", 00:16:04.502 "adrfam": "IPv4", 00:16:04.502 "traddr": "10.0.0.1", 00:16:04.502 "trsvcid": "56190" 00:16:04.502 }, 00:16:04.502 "auth": { 00:16:04.502 "state": "completed", 00:16:04.502 "digest": "sha512", 00:16:04.502 "dhgroup": "ffdhe2048" 00:16:04.502 } 00:16:04.502 } 00:16:04.502 ]' 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.502 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.760 13:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:16:05.697 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:05.955 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.213 13:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.472
00:16:06.472 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:06.472 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:06.472 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
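
A detail of the iterations around this point: the key3 passes (like the one that just finished) call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key only, while the key0 pass starting here supplies --dhchap-ctrlr-key as well, i.e. key3 exercises one-way authentication (the host proves itself, the controller is not challenged) and the other keys exercise mutual authentication. That is the ${ckeys[$3]:+...} expansion at target/auth.sh@37 collapsing to an empty array. A sketch of the mechanism, with keyid/subnqn/hostnqn standing in for the script's own variables (keys/ckeys are parallel arrays set up earlier in auth.sh, outside this excerpt):

# when ckeys[keyid] is empty, the array is empty and the flag vanishes from both calls
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
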
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:06.731 {
00:16:06.731 "cntlid": 113,
00:16:06.731 "qid": 0,
00:16:06.731 "state": "enabled",
00:16:06.731 "thread": "nvmf_tgt_poll_group_000",
00:16:06.731 "listen_address": {
00:16:06.731 "trtype": "TCP",
00:16:06.731 "adrfam": "IPv4",
00:16:06.731 "traddr": "10.0.0.2",
00:16:06.731 "trsvcid": "4420"
00:16:06.731 },
00:16:06.731 "peer_address": {
00:16:06.731 "trtype": "TCP",
00:16:06.731 "adrfam": "IPv4",
00:16:06.731 "traddr": "10.0.0.1",
00:16:06.731 "trsvcid": "56220"
00:16:06.731 },
00:16:06.731 "auth": {
00:16:06.731 "state": "completed",
00:16:06.731 "digest": "sha512",
00:16:06.731 "dhgroup": "ffdhe3072"
00:16:06.731 }
00:16:06.731 }
00:16:06.731 ]'
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.731 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.990 13:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:07.948 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.206 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.207 13:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.777 00:16:08.777 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.777 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.777 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.036 { 00:16:09.036 "cntlid": 115, 00:16:09.036 "qid": 0, 00:16:09.036 "state": "enabled", 00:16:09.036 "thread": "nvmf_tgt_poll_group_000", 00:16:09.036 "listen_address": { 00:16:09.036 "trtype": "TCP", 00:16:09.036 "adrfam": "IPv4", 00:16:09.036 "traddr": "10.0.0.2", 00:16:09.036 "trsvcid": "4420" 00:16:09.036 }, 00:16:09.036 "peer_address": { 00:16:09.036 "trtype": "TCP", 00:16:09.036 "adrfam": "IPv4", 00:16:09.036 "traddr": "10.0.0.1", 00:16:09.036 "trsvcid": "56252" 00:16:09.036 }, 00:16:09.036 "auth": { 00:16:09.036 "state": "completed", 00:16:09.036 "digest": "sha512", 00:16:09.036 "dhgroup": "ffdhe3072" 00:16:09.036 } 00:16:09.036 } 
00:16:09.036 ]' 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.036 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.294 13:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.526 13:08:32 
00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:10.232 13:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.526 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.092
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:11.092 {
00:16:11.092 "cntlid": 117,
00:16:11.092 "qid": 0,
00:16:11.092 "state": "enabled",
00:16:11.092 "thread": "nvmf_tgt_poll_group_000",
00:16:11.092 "listen_address": {
00:16:11.092 "trtype": "TCP",
00:16:11.092 "adrfam": "IPv4",
00:16:11.092 "traddr": "10.0.0.2",
00:16:11.092 "trsvcid": "4420"
00:16:11.092 },
00:16:11.092 "peer_address": {
00:16:11.092 "trtype": "TCP",
00:16:11.092 "adrfam": "IPv4",
00:16:11.092 "traddr": "10.0.0.1",
00:16:11.092 "trsvcid": "52248"
00:16:11.092 },
00:16:11.092 "auth": {
00:16:11.092 "state": "completed",
00:16:11.092 "digest": "sha512",
00:16:11.092 "dhgroup": "ffdhe3072"
00:16:11.092 }
00:16:11.092 }
00:16:11.092 ]'
00:16:11.092 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:11.350 13:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:11.608 13:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E:
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:12.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:12.543 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:12.801 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:13.059
00:16:13.059 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:13.059 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:13.059 13:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:13.319 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:13.319 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:13.319 13:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.319 13:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:13.577 {
00:16:13.577 "cntlid": 119,
00:16:13.577 "qid": 0,
00:16:13.577 "state": "enabled",
00:16:13.577 "thread": "nvmf_tgt_poll_group_000",
00:16:13.577 "listen_address": {
00:16:13.577 "trtype": "TCP",
00:16:13.577 "adrfam": "IPv4",
00:16:13.577 "traddr": "10.0.0.2",
00:16:13.577 "trsvcid": "4420"
00:16:13.577 },
00:16:13.577 "peer_address": {
00:16:13.577 "trtype": "TCP",
00:16:13.577 "adrfam": "IPv4",
00:16:13.577 "traddr": "10.0.0.1",
00:16:13.577 "trsvcid": "52280"
00:16:13.577 },
00:16:13.577 "auth": {
00:16:13.577 "state": "completed",
00:16:13.577 "digest": "sha512",
00:16:13.577 "dhgroup": "ffdhe3072"
00:16:13.577 }
00:16:13.577 }
00:16:13.577 ]'
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:13.577 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:13.834 13:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:14.766 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.023 13:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
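Every block in this section is one iteration of the same loop: pin the initiator to a single digest/DH-group combination, register the key with the target, attach, verify, tear down. Roughly, as reconstructed from the trace; hostrpc and rpc_cmd are defined below to match how the trace expands them, and hostnqn is shorthand for the long uuid NQN, so treat this as a sketch rather than the literal auth.sh source:

  # Assumes an SPDK target on the default RPC socket, a host-side bdevperf
  # app on /var/tmp/host.sock, and keys=(...)/ckeys=(...) arrays of key names.
  rpc_cmd() { scripts/rpc.py "$@"; }                        # target RPC
  hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host RPC
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  for keyid in "${!keys[@]}"; do
    # The host may only offer this one digest and DH group, so a successful
    # attach proves that exact combination negotiated end to end.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Register the host NQN with the key under test; a controller key is
    # passed only when bidirectional authentication is being exercised.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Attaching from the host runs DH-HMAC-CHAP; the qpair auth state is then
    # checked, the controller detached, and the nvme-cli path exercised.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
  done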
00:16:15.843 "traddr": "10.0.0.2", 00:16:15.843 "trsvcid": "4420" 00:16:15.843 }, 00:16:15.843 "peer_address": { 00:16:15.843 "trtype": "TCP", 00:16:15.843 "adrfam": "IPv4", 00:16:15.843 "traddr": "10.0.0.1", 00:16:15.843 "trsvcid": "52308" 00:16:15.843 }, 00:16:15.843 "auth": { 00:16:15.843 "state": "completed", 00:16:15.843 "digest": "sha512", 00:16:15.843 "dhgroup": "ffdhe4096" 00:16:15.843 } 00:16:15.843 } 00:16:15.843 ]' 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.843 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.100 13:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.034 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.292 13:08:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.292 13:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.860 00:16:17.860 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.860 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.860 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.118 { 00:16:18.118 "cntlid": 123, 00:16:18.118 "qid": 0, 00:16:18.118 "state": "enabled", 00:16:18.118 "thread": "nvmf_tgt_poll_group_000", 00:16:18.118 "listen_address": { 00:16:18.118 "trtype": "TCP", 00:16:18.118 "adrfam": "IPv4", 00:16:18.118 "traddr": "10.0.0.2", 00:16:18.118 "trsvcid": "4420" 00:16:18.118 }, 00:16:18.118 "peer_address": { 00:16:18.118 "trtype": "TCP", 00:16:18.118 "adrfam": "IPv4", 00:16:18.118 "traddr": "10.0.0.1", 00:16:18.118 "trsvcid": "52348" 00:16:18.118 }, 00:16:18.118 "auth": { 00:16:18.118 "state": "completed", 00:16:18.118 "digest": "sha512", 00:16:18.118 "dhgroup": "ffdhe4096" 00:16:18.118 } 00:16:18.118 } 00:16:18.118 ]' 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.118 13:08:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.118 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.377 13:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.311 13:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.881 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.140 00:16:20.140 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.140 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.140 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.399 { 00:16:20.399 "cntlid": 125, 00:16:20.399 "qid": 0, 00:16:20.399 "state": "enabled", 00:16:20.399 "thread": "nvmf_tgt_poll_group_000", 00:16:20.399 "listen_address": { 00:16:20.399 "trtype": "TCP", 00:16:20.399 "adrfam": "IPv4", 00:16:20.399 "traddr": "10.0.0.2", 00:16:20.399 "trsvcid": "4420" 00:16:20.399 }, 00:16:20.399 "peer_address": { 00:16:20.399 "trtype": "TCP", 00:16:20.399 "adrfam": "IPv4", 00:16:20.399 "traddr": "10.0.0.1", 00:16:20.399 "trsvcid": "37764" 00:16:20.399 }, 00:16:20.399 "auth": { 00:16:20.399 "state": "completed", 00:16:20.399 "digest": "sha512", 00:16:20.399 "dhgroup": "ffdhe4096" 00:16:20.399 } 00:16:20.399 } 00:16:20.399 ]' 00:16:20.399 13:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.399 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.399 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.399 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.399 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.658 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.658 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.658 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.916 13:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
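Each pass also repeats the handshake with the kernel initiator through nvme-cli, as in the connect/disconnect pair just above. A trimmed sketch of that host side (secrets abbreviated here; the full DHHC-1 strings appear verbatim in the log):

  # --dhchap-secret is the key this host authenticates with;
  # --dhchap-ctrl-secret is the key the controller must prove it holds,
  # so supplying both requests bidirectional authentication.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:02:MmQx...wyg==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:N2I0...p6E:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

In the DHHC-1:NN:...: representation, the NN field describes how the base64 payload was derived (00 for a plain secret, 01/02/03 for a SHA-256/384/512 transform); that reading follows the NVMe DH-HMAC-CHAP secret format rather than anything this log states explicitly.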
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:21.852 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.109 13:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.677
00:16:22.677 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:22.677 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:22.677 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:22.936 {
00:16:22.936 "cntlid": 127,
00:16:22.936 "qid": 0,
00:16:22.936 "state": "enabled",
00:16:22.936 "thread": "nvmf_tgt_poll_group_000",
00:16:22.936 "listen_address": {
00:16:22.936 "trtype": "TCP",
00:16:22.936 "adrfam": "IPv4",
00:16:22.936 "traddr": "10.0.0.2",
00:16:22.936 "trsvcid": "4420"
00:16:22.936 },
00:16:22.936 "peer_address": {
00:16:22.936 "trtype": "TCP",
00:16:22.936 "adrfam": "IPv4",
00:16:22.936 "traddr": "10.0.0.1",
00:16:22.936 "trsvcid": "37778"
00:16:22.936 },
00:16:22.936 "auth": {
00:16:22.936 "state": "completed",
00:16:22.936 "digest": "sha512",
00:16:22.936 "dhgroup": "ffdhe4096"
00:16:22.936 }
00:16:22.936 }
00:16:22.936 ]'
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.936 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:23.195 13:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=:
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:24.133 13:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.391 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.957
00:16:24.957 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:24.957 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:24.957 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:25.216 {
00:16:25.216 "cntlid": 129,
00:16:25.216 "qid": 0,
00:16:25.216 "state": "enabled",
00:16:25.216 "thread": "nvmf_tgt_poll_group_000",
00:16:25.216 "listen_address": {
00:16:25.216 "trtype": "TCP",
00:16:25.216 "adrfam": "IPv4",
00:16:25.216 "traddr": "10.0.0.2",
00:16:25.216 "trsvcid": "4420"
00:16:25.216 },
00:16:25.216 "peer_address": {
00:16:25.216 "trtype": "TCP",
00:16:25.216 "adrfam": "IPv4",
00:16:25.216 "traddr": "10.0.0.1",
00:16:25.216 "trsvcid": "37816"
00:16:25.216 },
00:16:25.216 "auth": {
00:16:25.216 "state": "completed",
00:16:25.216 "digest": "sha512",
00:16:25.216 "dhgroup": "ffdhe6144"
00:16:25.216 }
00:16:25.216 }
00:16:25.216 ]'
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:25.216 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.474 13:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.732 13:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=:
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:26.670 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.928 13:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:27.496
00:16:27.496 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:27.496 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:27.496 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:27.754 {
00:16:27.754 "cntlid": 131,
00:16:27.754 "qid": 0,
00:16:27.754 "state": "enabled",
00:16:27.754 "thread": "nvmf_tgt_poll_group_000",
00:16:27.754 "listen_address": {
00:16:27.754 "trtype": "TCP",
00:16:27.754 "adrfam": "IPv4",
00:16:27.754 "traddr": "10.0.0.2",
00:16:27.754 "trsvcid": "4420"
00:16:27.754 },
00:16:27.754 "peer_address": {
00:16:27.754 "trtype": "TCP",
00:16:27.754 "adrfam": "IPv4",
00:16:27.754 "traddr": "10.0.0.1",
00:16:27.754 "trsvcid": "37836"
00:16:27.754 },
00:16:27.754 "auth": {
00:16:27.754 "state": "completed",
00:16:27.754 "digest": "sha512",
00:16:27.754 "dhgroup": "ffdhe6144"
00:16:27.754 }
00:16:27.754 }
00:16:27.754 ]'
00:16:27.754 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.012 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.270 13:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==:
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:29.206 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:29.470 13:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.470 13:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.470 13:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.470 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:29.470 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.041
00:16:30.041 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:30.041 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:30.041 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.340 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:30.340 {
00:16:30.340 "cntlid": 133,
00:16:30.340 "qid": 0,
00:16:30.340 "state": "enabled",
00:16:30.340 "thread": "nvmf_tgt_poll_group_000",
00:16:30.340 "listen_address": {
00:16:30.340 "trtype": "TCP",
00:16:30.340 "adrfam": "IPv4",
00:16:30.340 "traddr": "10.0.0.2",
00:16:30.340 "trsvcid": "4420"
00:16:30.340 },
00:16:30.340 "peer_address": {
00:16:30.340 "trtype": "TCP",
00:16:30.340 "adrfam": "IPv4",
00:16:30.341 "traddr": "10.0.0.1",
00:16:30.341 "trsvcid": "55574"
00:16:30.341 },
00:16:30.341 "auth": {
00:16:30.341 "state": "completed",
00:16:30.341 "digest": "sha512",
00:16:30.341 "dhgroup": "ffdhe6144"
00:16:30.341 }
00:16:30.341 }
00:16:30.341 ]'
00:16:30.341 13:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.341 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.622 13:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E:
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:31.557 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:31.814 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:32.378
00:16:32.378 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:32.378 13:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:32.378 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
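The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that recurs above (where $3 is connect_authenticate's key-index argument) is why the key3 iterations, like the one just traced, add the host with --dhchap-key key3 and no controller key: bash's ${var:+word} expands to word only when var is set and non-empty, so an empty ckeys[3] collapses the option array to nothing and that pass tests unidirectional authentication. A standalone illustration of the idiom, with array contents invented for the demo:

  #!/usr/bin/env bash
  ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "")
  for keyid in "${!ckeys[@]}"; do
    # Expands to the flag and its value only when ckeys[keyid] is non-empty;
    # for the empty slot the array stays zero-length and the flag vanishes.
    opt=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${opt[*]:-(no controller key, unidirectional)}"
  done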
00:16:32.635 "peer_address": { 00:16:32.635 "trtype": "TCP", 00:16:32.635 "adrfam": "IPv4", 00:16:32.635 "traddr": "10.0.0.1", 00:16:32.635 "trsvcid": "55604" 00:16:32.635 }, 00:16:32.635 "auth": { 00:16:32.635 "state": "completed", 00:16:32.635 "digest": "sha512", 00:16:32.635 "dhgroup": "ffdhe6144" 00:16:32.635 } 00:16:32.635 } 00:16:32.635 ]' 00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.635 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.892 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.892 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.892 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.892 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.892 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.149 13:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:16:34.077 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.077 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.077 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.078 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.335 13:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.267 00:16:35.267 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.267 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.267 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.525 { 00:16:35.525 "cntlid": 137, 00:16:35.525 "qid": 0, 00:16:35.525 "state": "enabled", 00:16:35.525 "thread": "nvmf_tgt_poll_group_000", 00:16:35.525 "listen_address": { 00:16:35.525 "trtype": "TCP", 00:16:35.525 "adrfam": "IPv4", 00:16:35.525 "traddr": "10.0.0.2", 00:16:35.525 "trsvcid": "4420" 00:16:35.525 }, 00:16:35.525 "peer_address": { 00:16:35.525 "trtype": "TCP", 00:16:35.525 "adrfam": "IPv4", 00:16:35.525 "traddr": "10.0.0.1", 00:16:35.525 "trsvcid": "55612" 00:16:35.525 }, 00:16:35.525 "auth": { 00:16:35.525 "state": "completed", 00:16:35.525 "digest": "sha512", 00:16:35.525 "dhgroup": "ffdhe8192" 00:16:35.525 } 00:16:35.525 } 00:16:35.525 ]' 00:16:35.525 13:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.525 13:08:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.525 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.782 13:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:16:36.713 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.978 13:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.907 00:16:37.907 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.907 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.907 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.164 { 00:16:38.164 "cntlid": 139, 00:16:38.164 "qid": 0, 00:16:38.164 "state": "enabled", 00:16:38.164 "thread": "nvmf_tgt_poll_group_000", 00:16:38.164 "listen_address": { 00:16:38.164 "trtype": "TCP", 00:16:38.164 "adrfam": "IPv4", 00:16:38.164 "traddr": "10.0.0.2", 00:16:38.164 "trsvcid": "4420" 00:16:38.164 }, 00:16:38.164 "peer_address": { 00:16:38.164 "trtype": "TCP", 00:16:38.164 "adrfam": "IPv4", 00:16:38.164 "traddr": "10.0.0.1", 00:16:38.164 "trsvcid": "55630" 00:16:38.164 }, 00:16:38.164 "auth": { 00:16:38.164 "state": "completed", 00:16:38.164 "digest": "sha512", 00:16:38.164 "dhgroup": "ffdhe8192" 00:16:38.164 } 00:16:38.164 } 00:16:38.164 ]' 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.164 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.422 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.422 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.422 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.422 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.422 13:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.678 13:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTA0NDY5MmUyNWVjYWI5ZDFmZWEwMWYyZGMxNTliYTmQnc3G: --dhchap-ctrl-secret DHHC-1:02:YmU1YzA4ZTM4M2NhMTNjMzFmZmNkODc4NTEzN2E5NzIyNmFmZGExMjcwZTBmYjBi+TspAw==: 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.609 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.866 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:39.866 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.866 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.866 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.867 13:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.799 00:16:40.799 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.799 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.799 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.057 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.057 { 00:16:41.057 "cntlid": 141, 00:16:41.057 "qid": 0, 00:16:41.057 "state": "enabled", 00:16:41.057 "thread": "nvmf_tgt_poll_group_000", 00:16:41.058 "listen_address": { 00:16:41.058 "trtype": "TCP", 00:16:41.058 "adrfam": "IPv4", 00:16:41.058 "traddr": "10.0.0.2", 00:16:41.058 "trsvcid": "4420" 00:16:41.058 }, 00:16:41.058 "peer_address": { 00:16:41.058 "trtype": "TCP", 00:16:41.058 "adrfam": "IPv4", 00:16:41.058 "traddr": "10.0.0.1", 00:16:41.058 "trsvcid": "56826" 00:16:41.058 }, 00:16:41.058 "auth": { 00:16:41.058 "state": "completed", 00:16:41.058 "digest": "sha512", 00:16:41.058 "dhgroup": "ffdhe8192" 00:16:41.058 } 00:16:41.058 } 00:16:41.058 ]' 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.058 13:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.315 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MmQxY2M2MjJlYzFkNmYwOWFmNGYzNjJhNDNkYmVkYWI5OWY1OGU1MDIwYjRlZTA2L0rwyg==: --dhchap-ctrl-secret DHHC-1:01:N2I0YjZhZWEzYjk4NWMxZGY3OTFkNmJiM2I0M2QwYzP6Ap6E: 00:16:42.247 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.505 13:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.763 13:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.723 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.723 { 00:16:43.723 "cntlid": 143, 00:16:43.723 "qid": 0, 00:16:43.723 "state": "enabled", 00:16:43.723 "thread": "nvmf_tgt_poll_group_000", 00:16:43.723 "listen_address": { 00:16:43.723 "trtype": "TCP", 00:16:43.723 "adrfam": "IPv4", 00:16:43.723 "traddr": "10.0.0.2", 00:16:43.723 "trsvcid": "4420" 00:16:43.723 }, 00:16:43.723 "peer_address": { 00:16:43.723 "trtype": "TCP", 00:16:43.723 "adrfam": "IPv4", 00:16:43.723 "traddr": "10.0.0.1", 00:16:43.723 "trsvcid": "56844" 00:16:43.723 }, 00:16:43.723 "auth": { 00:16:43.723 "state": "completed", 00:16:43.723 "digest": "sha512", 00:16:43.723 "dhgroup": "ffdhe8192" 00:16:43.723 } 00:16:43.723 } 00:16:43.723 ]' 00:16:43.723 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.991 
13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.991 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.249 13:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.183 13:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.441 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.376 00:16:46.376 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.376 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.376 13:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.634 { 00:16:46.634 "cntlid": 145, 00:16:46.634 "qid": 0, 00:16:46.634 "state": "enabled", 00:16:46.634 "thread": "nvmf_tgt_poll_group_000", 00:16:46.634 "listen_address": { 00:16:46.634 "trtype": "TCP", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "10.0.0.2", 00:16:46.634 "trsvcid": "4420" 00:16:46.634 }, 00:16:46.634 "peer_address": { 00:16:46.634 "trtype": "TCP", 00:16:46.634 "adrfam": "IPv4", 00:16:46.634 "traddr": "10.0.0.1", 00:16:46.634 "trsvcid": "56872" 00:16:46.634 }, 00:16:46.634 "auth": { 00:16:46.634 "state": "completed", 00:16:46.634 "digest": "sha512", 00:16:46.634 "dhgroup": "ffdhe8192" 00:16:46.634 } 00:16:46.634 } 00:16:46.634 ]' 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.634 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.635 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.893 13:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YzUwYWZjOTUyZDlkZTRmNTJmZjE1NjVjNTRhNjE4OTllNjk0M2I1MmI0YjdhNjY0BKxi8A==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDg0OWM4OTYyNDBmZWRlY2I4ZWE2Njg3YWE5NDNjODM3Y2RlNTU3N2MzYWYzNmY1M2MwMTBjMThkYTM0MyR22yM=: 00:16:47.826 13:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.084 13:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:49.018 request: 00:16:49.018 { 00:16:49.018 "name": "nvme0", 00:16:49.018 "trtype": "tcp", 00:16:49.018 "traddr": "10.0.0.2", 00:16:49.018 "adrfam": "ipv4", 00:16:49.018 "trsvcid": "4420", 00:16:49.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.018 "prchk_reftag": false, 00:16:49.018 "prchk_guard": false, 00:16:49.018 "hdgst": false, 00:16:49.018 "ddgst": false, 00:16:49.018 "dhchap_key": "key2", 00:16:49.018 "method": "bdev_nvme_attach_controller", 00:16:49.018 "req_id": 1 00:16:49.018 } 00:16:49.018 Got JSON-RPC error response 00:16:49.018 response: 00:16:49.018 { 00:16:49.018 "code": -5, 00:16:49.018 "message": "Input/output error" 00:16:49.018 } 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.018 13:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.586 request: 00:16:49.586 { 00:16:49.586 "name": "nvme0", 00:16:49.586 "trtype": "tcp", 00:16:49.586 "traddr": "10.0.0.2", 00:16:49.586 "adrfam": "ipv4", 00:16:49.586 "trsvcid": "4420", 00:16:49.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.586 "prchk_reftag": false, 00:16:49.586 "prchk_guard": false, 00:16:49.586 "hdgst": false, 00:16:49.586 "ddgst": false, 00:16:49.586 "dhchap_key": "key1", 00:16:49.586 "dhchap_ctrlr_key": "ckey2", 00:16:49.586 "method": "bdev_nvme_attach_controller", 00:16:49.586 "req_id": 1 00:16:49.586 } 00:16:49.586 Got JSON-RPC error response 00:16:49.586 response: 00:16:49.586 { 00:16:49.586 "code": -5, 00:16:49.586 "message": "Input/output error" 00:16:49.586 } 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.586 13:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.519 request: 00:16:50.519 { 00:16:50.519 "name": "nvme0", 00:16:50.519 "trtype": "tcp", 00:16:50.519 "traddr": "10.0.0.2", 00:16:50.519 "adrfam": "ipv4", 00:16:50.519 "trsvcid": "4420", 00:16:50.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.519 "prchk_reftag": false, 00:16:50.519 "prchk_guard": false, 00:16:50.519 "hdgst": false, 00:16:50.519 "ddgst": false, 00:16:50.519 "dhchap_key": "key1", 00:16:50.519 "dhchap_ctrlr_key": "ckey1", 00:16:50.519 "method": "bdev_nvme_attach_controller", 00:16:50.519 "req_id": 1 00:16:50.519 } 00:16:50.519 Got JSON-RPC error response 00:16:50.519 response: 00:16:50.519 { 00:16:50.519 "code": -5, 00:16:50.519 "message": "Input/output error" 00:16:50.519 } 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3821789 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3821789 ']' 00:16:50.519 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3821789 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821789 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821789' 00:16:50.520 killing process with pid 3821789 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3821789 00:16:50.520 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3821789 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3844364 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3844364 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3844364 ']' 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.778 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3844364 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3844364 ']' 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
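For readers following the log: the target restart recorded just above reduces to a few shell steps. This is a condensed sketch, not the test framework's own code; the binary path, netns name (cvl_0_0_ns_spdk), flags, and socket path are copied verbatim from the log lines above, and the polling loop is only a stand-in assumption for the framework's waitforlisten helper.

  # Minimal sketch of the nvmf_tgt restart shown above (paths/netns verbatim from this log)
  sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Stand-in for waitforlisten: poll until the RPC server answers on the UNIX socket
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done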
00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.036 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.295 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.295 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:51.295 13:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:51.295 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.295 13:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.553 13:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.485 00:16:52.485 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.485 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.485 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.743 { 00:16:52.743 
"cntlid": 1, 00:16:52.743 "qid": 0, 00:16:52.743 "state": "enabled", 00:16:52.743 "thread": "nvmf_tgt_poll_group_000", 00:16:52.743 "listen_address": { 00:16:52.743 "trtype": "TCP", 00:16:52.743 "adrfam": "IPv4", 00:16:52.743 "traddr": "10.0.0.2", 00:16:52.743 "trsvcid": "4420" 00:16:52.743 }, 00:16:52.743 "peer_address": { 00:16:52.743 "trtype": "TCP", 00:16:52.743 "adrfam": "IPv4", 00:16:52.743 "traddr": "10.0.0.1", 00:16:52.743 "trsvcid": "51306" 00:16:52.743 }, 00:16:52.743 "auth": { 00:16:52.743 "state": "completed", 00:16:52.743 "digest": "sha512", 00:16:52.743 "dhgroup": "ffdhe8192" 00:16:52.743 } 00:16:52.743 } 00:16:52.743 ]' 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.743 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.001 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.001 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.001 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.001 13:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODMzMTNlNTlhOTIzN2VmMTczYTZhNjZhZTJhYmZmYzFjMDA2YTI0NTFjZDk2M2MwNjQwNzkwMTJiNGFhYWZiNkU5B1Y=: 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.375 13:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.632 request: 00:16:54.632 { 00:16:54.632 "name": "nvme0", 00:16:54.632 "trtype": "tcp", 00:16:54.632 "traddr": "10.0.0.2", 00:16:54.632 "adrfam": "ipv4", 00:16:54.632 "trsvcid": "4420", 00:16:54.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.632 "prchk_reftag": false, 00:16:54.632 "prchk_guard": false, 00:16:54.632 "hdgst": false, 00:16:54.632 "ddgst": false, 00:16:54.632 "dhchap_key": "key3", 00:16:54.632 "method": "bdev_nvme_attach_controller", 00:16:54.632 "req_id": 1 00:16:54.632 } 00:16:54.632 Got JSON-RPC error response 00:16:54.632 response: 00:16:54.632 { 00:16:54.632 "code": -5, 00:16:54.632 "message": "Input/output error" 00:16:54.632 } 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:54.632 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.890 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.148 request: 00:16:55.148 { 00:16:55.148 "name": "nvme0", 00:16:55.148 "trtype": "tcp", 00:16:55.148 "traddr": "10.0.0.2", 00:16:55.148 "adrfam": "ipv4", 00:16:55.148 "trsvcid": "4420", 00:16:55.148 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.148 "prchk_reftag": false, 00:16:55.148 "prchk_guard": false, 00:16:55.148 "hdgst": false, 00:16:55.148 "ddgst": false, 00:16:55.148 "dhchap_key": "key3", 00:16:55.148 "method": "bdev_nvme_attach_controller", 00:16:55.148 "req_id": 1 00:16:55.148 } 00:16:55.148 Got JSON-RPC error response 00:16:55.148 response: 00:16:55.148 { 00:16:55.148 "code": -5, 00:16:55.148 "message": "Input/output error" 00:16:55.148 } 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.148 13:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.715 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.715 request: 00:16:55.715 { 00:16:55.715 "name": "nvme0", 00:16:55.715 "trtype": "tcp", 00:16:55.715 "traddr": "10.0.0.2", 00:16:55.715 "adrfam": "ipv4", 00:16:55.715 "trsvcid": "4420", 00:16:55.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.715 "prchk_reftag": false, 00:16:55.715 "prchk_guard": false, 00:16:55.715 "hdgst": false, 00:16:55.715 "ddgst": false, 00:16:55.715 
"dhchap_key": "key0", 00:16:55.715 "dhchap_ctrlr_key": "key1", 00:16:55.715 "method": "bdev_nvme_attach_controller", 00:16:55.715 "req_id": 1 00:16:55.715 } 00:16:55.715 Got JSON-RPC error response 00:16:55.715 response: 00:16:55.715 { 00:16:55.715 "code": -5, 00:16:55.715 "message": "Input/output error" 00:16:55.715 } 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:55.974 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:56.232 00:16:56.232 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:56.232 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:56.232 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.489 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.489 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.489 13:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3821816 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3821816 ']' 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3821816 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821816 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821816' 00:16:56.747 killing process with pid 3821816 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3821816 00:16:56.747 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3821816 
00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.314 rmmod nvme_tcp 00:16:57.314 rmmod nvme_fabrics 00:16:57.314 rmmod nvme_keyring 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3844364 ']' 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3844364 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3844364 ']' 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3844364 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3844364 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3844364' 00:16:57.314 killing process with pid 3844364 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3844364 00:16:57.314 13:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3844364 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.573 13:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.477 13:09:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.736 13:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.n0x /tmp/spdk.key-sha256.jLx /tmp/spdk.key-sha384.LLA /tmp/spdk.key-sha512.50P /tmp/spdk.key-sha512.tts /tmp/spdk.key-sha384.lci /tmp/spdk.key-sha256.WxU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:59.736 00:16:59.736 real 3m10.632s 00:16:59.736 user 7m24.335s 00:16:59.736 sys 0m25.117s 00:16:59.736 13:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:59.736 13:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.736 ************************************ 00:16:59.736 END TEST nvmf_auth_target 00:16:59.736 ************************************ 00:16:59.736 13:09:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:59.736 13:09:21 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:59.736 13:09:21 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:59.736 13:09:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:59.736 13:09:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.736 13:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.736 ************************************ 00:16:59.736 START TEST nvmf_bdevio_no_huge 00:16:59.736 ************************************ 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:59.736 * Looking for test storage... 00:16:59.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.736 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
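The append at nvmf/common.sh@29 above and the one at @31 just below are build_nvmf_app_args assembling the target's command line. A rough, hypothetical sketch of what the array ends up holding for this suite, assuming the harness conventions (the concrete values match the nvmf_tgt launch line that appears further down in this log):

    # Condensed view of the assembled launcher for this run (an assumption,
    # reconstructed from the xtrace around this point and the launch line below):
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")                   # here: --no-huge -s 1024, i.e. a 1 GiB cap without hugepages
    # Later launched inside the target netns with core mask 0x78 (cores 3-6),
    # matching the four "Reactor started on core 3..6" notices further down:
    #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78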
00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.737 13:09:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:17:01.688 00:17:01.688 --- 10.0.0.2 ping statistics --- 00:17:01.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.688 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:17:01.688 00:17:01.688 --- 10.0.0.1 ping statistics --- 00:17:01.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.688 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:01.688 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3847139 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3847139 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3847139 ']' 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.689 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:01.948 [2024-07-15 13:09:23.427604] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:01.948 [2024-07-15 13:09:23.427690] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:01.948 [2024-07-15 13:09:23.499438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.948 [2024-07-15 13:09:23.625546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:01.948 [2024-07-15 13:09:23.625606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.948 [2024-07-15 13:09:23.625622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.948 [2024-07-15 13:09:23.625635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.948 [2024-07-15 13:09:23.625647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.948 [2024-07-15 13:09:23.625743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:01.948 [2024-07-15 13:09:23.625798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:01.948 [2024-07-15 13:09:23.625853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:01.948 [2024-07-15 13:09:23.625856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 [2024-07-15 13:09:23.758865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 Malloc0 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.205 13:09:23 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:02.205 [2024-07-15 13:09:23.797194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=()
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:02.205 {
00:17:02.205 "params": {
00:17:02.205 "name": "Nvme$subsystem",
00:17:02.205 "trtype": "$TEST_TRANSPORT",
00:17:02.205 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:02.205 "adrfam": "ipv4",
00:17:02.205 "trsvcid": "$NVMF_PORT",
00:17:02.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:02.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:02.205 "hdgst": ${hdgst:-false},
00:17:02.205 "ddgst": ${ddgst:-false}
00:17:02.205 },
00:17:02.205 "method": "bdev_nvme_attach_controller"
00:17:02.205 }
00:17:02.205 EOF
00:17:02.205 )")
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq .
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:17:02.205 13:09:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:02.205 "params": {
00:17:02.205 "name": "Nvme1",
00:17:02.205 "trtype": "tcp",
00:17:02.205 "traddr": "10.0.0.2",
00:17:02.205 "adrfam": "ipv4",
00:17:02.205 "trsvcid": "4420",
00:17:02.205 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:02.205 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:02.205 "hdgst": false,
00:17:02.205 "ddgst": false
00:17:02.205 },
00:17:02.205 "method": "bdev_nvme_attach_controller"
00:17:02.205 }'
00:17:02.205 [2024-07-15 13:09:23.845326] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:17:02.205 [2024-07-15 13:09:23.845396] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3847163 ]
00:17:02.461 [2024-07-15 13:09:23.911885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:02.461 [2024-07-15 13:09:24.026600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:02.461 [2024-07-15 13:09:24.026648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:17:02.461 [2024-07-15 13:09:24.026652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:02.718 I/O targets:
00:17:02.718 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:17:02.718
00:17:02.718
00:17:02.718 CUnit - A unit testing framework for C - Version 2.1-3
00:17:02.718 http://cunit.sourceforge.net/
00:17:02.718
00:17:02.718
00:17:02.718 Suite: bdevio tests on: Nvme1n1
00:17:02.718 Test: blockdev write read block ...passed
00:17:02.718 Test: blockdev write zeroes read block ...passed
00:17:02.718 Test: blockdev write zeroes read no split ...passed
00:17:02.975 Test: blockdev write zeroes read split ...passed
00:17:02.975 Test: blockdev write zeroes read split partial ...passed
00:17:02.975 Test: blockdev reset ...[2024-07-15 13:09:24.512382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:02.975 [2024-07-15 13:09:24.512489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc9fb0 (9): Bad file descriptor
00:17:02.975 [2024-07-15 13:09:24.662212] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:02.975 passed
00:17:03.231 Test: blockdev write read 8 blocks ...passed
00:17:03.231 Test: blockdev write read size > 128k ...passed
00:17:03.231 Test: blockdev write read invalid size ...passed
00:17:03.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:17:03.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:17:03.231 Test: blockdev write read max offset ...passed
00:17:03.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:17:03.231 Test: blockdev writev readv 8 blocks ...passed
00:17:03.231 Test: blockdev writev readv 30 x 1block ...passed
00:17:03.231 Test: blockdev writev readv block ...passed
00:17:03.231 Test: blockdev writev readv size > 128k ...passed
00:17:03.231 Test: blockdev writev readv size > 128k in two iovs ...passed
00:17:03.231 Test: blockdev comparev and writev ...[2024-07-15 13:09:24.922197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.922233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.922257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.922274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.922682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.922706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.922727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.922742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.923144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.923167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.923188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.923205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.923599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.923622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:17:03.231 [2024-07-15 13:09:24.923644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:17:03.231 [2024-07-15 13:09:24.923660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:17:03.489 passed
00:17:03.489 Test: blockdev nvme passthru rw ...passed
00:17:03.489 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:09:25.006252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:03.489 [2024-07-15 13:09:25.006290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:17:03.489 [2024-07-15 13:09:25.006472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:03.489 [2024-07-15 13:09:25.006496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:17:03.489 [2024-07-15 13:09:25.006675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:03.489 [2024-07-15 13:09:25.006699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:17:03.489 [2024-07-15 13:09:25.006872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:03.489 [2024-07-15 13:09:25.006903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:17:03.489 passed
00:17:03.489 Test: blockdev nvme admin passthru ...passed
00:17:03.489 Test: blockdev copy ...passed
00:17:03.489
00:17:03.489 Run Summary: Type Total Ran Passed Failed Inactive
00:17:03.489 suites 1 1 n/a 0 0
00:17:03.489 tests 23 23 23 0 0
00:17:03.489 asserts 152 152 152 0 n/a
00:17:03.489
00:17:03.489 Elapsed time = 1.524 seconds
00:17:03.746 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:03.746 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.746 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:04.003 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3847139 ']'
00:17:04.003 13:09:25
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3847139 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3847139 ']' 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3847139 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3847139 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3847139' 00:17:04.003 killing process with pid 3847139 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3847139 00:17:04.003 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3847139 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.261 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.262 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.262 13:09:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.798 13:09:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.798 00:17:06.798 real 0m6.736s 00:17:06.798 user 0m12.187s 00:17:06.798 sys 0m2.568s 00:17:06.798 13:09:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.798 13:09:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.798 ************************************ 00:17:06.798 END TEST nvmf_bdevio_no_huge 00:17:06.798 ************************************ 00:17:06.798 13:09:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.798 13:09:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:06.798 13:09:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.798 13:09:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.798 13:09:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.798 ************************************ 00:17:06.798 START TEST nvmf_tls 00:17:06.798 ************************************ 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:06.798 * Looking for test storage... 
00:17:06.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.798 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.799 13:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.706 
13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.706 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
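gather_supported_nvmf_pci_devs, running above, walks the PCI bus, keeps only NICs whose vendor:device pair it recognizes, and resolves each function to its kernel netdev through sysfs. A condensed sketch of that logic; the ID table below lists only pairs visible in this log, the real list in nvmf/common.sh is longer:

declare -A supported=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 variants
    [0x8086:0x37d2]=x722                        # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx    # Mellanox ConnectX (subset)
)
for dev in /sys/bus/pci/devices/*; do
    id="$(<"$dev/vendor"):$(<"$dev/device")"
    [[ -n ${supported[$id]:-} ]] || continue
    echo "Found ${dev##*/} (${id/:/ - })"       # matches the log's "Found 0000:0a:00.0 (0x8086 - 0x159b)"
    for net in "$dev"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0
    done
done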
-- # (( 2 > 1 )) 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:17:08.707 00:17:08.707 --- 10.0.0.2 ping statistics --- 00:17:08.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.707 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
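The nvmf_tcp_init plumbing above can be replayed by hand: the first NIC port moves into a fresh network namespace as the target (10.0.0.2 on cvl_0_0), the second stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1), and both directions are ping-verified, which only passes because the two ports are physically looped on this rig. As root, commands verbatim from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP
ping -c 1 10.0.0.2                                              # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target ns -> root ns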
00:17:08.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:17:08.707 00:17:08.707 --- 10.0.0.1 ping statistics --- 00:17:08.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.707 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3849353 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3849353 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3849353 ']' 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.707 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.707 [2024-07-15 13:09:30.400203] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:08.708 [2024-07-15 13:09:30.400300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.966 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.966 [2024-07-15 13:09:30.473199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.966 [2024-07-15 13:09:30.588090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.966 [2024-07-15 13:09:30.588155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
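nvmfappstart then launches nvmf_tgt inside that namespace on a single core with --wait-for-rpc, and waitforlisten blocks until the RPC socket answers. A minimal stand-in for that polling loop, with paths shortened; the harness helper also watches the pid and retries more carefully, and rpc_get_methods is used here just as a cheap always-available RPC:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is up on /var/tmp/spdk.sock"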
00:17:08.966 [2024-07-15 13:09:30.588181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.966 [2024-07-15 13:09:30.588195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.966 [2024-07-15 13:09:30.588207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.966 [2024-07-15 13:09:30.588236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:08.966 13:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:09.223 true 00:17:09.223 13:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:09.223 13:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:09.480 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:09.480 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:09.480 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:09.736 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:09.736 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:09.993 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:09.993 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:09.993 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:10.250 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.250 13:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:10.507 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:10.507 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:10.507 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.507 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:10.765 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:10.765 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:10.765 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:11.022 13:09:32 nvmf_tcp.nvmf_tls -- 
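The RPC dance running here is the heart of the option checks: select the ssl sock implementation as default, set an option, read it back with sock_impl_get_options, and compare via jq. Condensed into one sequence (13 and 7 are the raw tls_version values the script round-trips; 13 selects TLS 1.3):

rpc=./scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
[[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]] || exit 1
$rpc sock_impl_set_options -i ssl --enable-ktls
[[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]] || exit 1
$rpc sock_impl_set_options -i ssl --disable-ktls
[[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]] || exit 1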
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.022 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:11.279 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:11.279 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:11.279 13:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:11.536 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.536 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:11.792 13:09:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.oFzm7zxFRS 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.y5OdcfITrm 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.oFzm7zxFRS 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.y5OdcfITrm 00:17:12.049 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:12.306 13:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:12.571 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.oFzm7zxFRS 00:17:12.571 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oFzm7zxFRS 00:17:12.571 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:12.828 [2024-07-15 13:09:34.327661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.829 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:13.086 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:13.344 [2024-07-15 13:09:34.813028] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.344 [2024-07-15 13:09:34.813305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.344 13:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:13.601 malloc0 00:17:13.601 13:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:13.859 13:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oFzm7zxFRS 00:17:14.122 [2024-07-15 13:09:35.563152] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:14.123 13:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.oFzm7zxFRS 00:17:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.134 Initializing NVMe Controllers 00:17:24.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:24.134 Initialization complete. Launching workers. 
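The two opaque NVMeTLSkey-1:01:...: strings minted a few steps back come from format_interchange_psk, which implements the NVMe TLS PSK interchange format: the configured secret with its CRC32 appended little-endian, base64-encoded behind a prefix that names the hash (01 here). A best-effort reconstruction that reproduces the first key in this log; the real helper lives in nvmf/common.sh, so treat this as a sketch of it rather than the canonical source:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the secret, little-endian
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Each key is then written to a mktemp file and chmod 0600 before nvmf_subsystem_add_host consumes it via --psk, as the log shows above.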
00:17:24.134 ======================================================== 00:17:24.134 Latency(us) 00:17:24.134 Device Information : IOPS MiB/s Average min max 00:17:24.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7665.67 29.94 8351.43 1182.73 9631.66 00:17:24.134 ======================================================== 00:17:24.134 Total : 7665.67 29.94 8351.43 1182.73 9631.66 00:17:24.134 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oFzm7zxFRS 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oFzm7zxFRS' 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3851165 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3851165 /var/tmp/bdevperf.sock 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3851165 ']' 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.134 13:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.134 [2024-07-15 13:09:45.722318] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
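That throughput line is self-consistent: 7665.67 IOPS of 4 KiB I/O is 7665.67 * 4096 / 2^20 ≈ 29.94 MiB/s, exactly the MiB/s column. The spdk_nvme_perf invocation that produced it, restated with flags annotated (array form only so comments can sit beside each flag; values are verbatim from the log):

perf_args=(
    -S ssl              # use the ssl (TLS) sock implementation
    -q 64               # queue depth 64
    -o 4096             # 4 KiB I/O size
    -w randrw -M 30     # random mixed workload, 30% reads
    -t 10               # run for 10 seconds
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1'
    --psk-path /tmp/tmp.oFzm7zxFRS
)
ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf "${perf_args[@]}"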
00:17:24.134 [2024-07-15 13:09:45.722396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851165 ] 00:17:24.134 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.134 [2024-07-15 13:09:45.786290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.393 [2024-07-15 13:09:45.895701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.393 13:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.393 13:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:24.393 13:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oFzm7zxFRS 00:17:24.651 [2024-07-15 13:09:46.274767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.651 [2024-07-15 13:09:46.274911] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.651 TLSTESTn1 00:17:24.909 13:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:24.909 Running I/O for 10 seconds... 00:17:34.875 00:17:34.875 Latency(us) 00:17:34.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.875 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:34.875 Verification LBA range: start 0x0 length 0x2000 00:17:34.875 TLSTESTn1 : 10.05 2816.31 11.00 0.00 0.00 45330.53 5873.97 68739.98 00:17:34.875 =================================================================================================================== 00:17:34.875 Total : 2816.31 11.00 0.00 0.00 45330.53 5873.97 68739.98 00:17:34.875 { 00:17:34.875 "core_count": 1, 00:17:34.875 "test_results": [ 00:17:34.875 { 00:17:34.875 "job": "TLSTESTn1", 00:17:34.875 "test_status": "finished", 00:17:34.875 "core_mask": "0x4", 00:17:34.875 "workload": "verify", 00:17:34.875 "verify_LBA_range_start": 0, 00:17:34.875 "verify_LBA_range_len": 8192, 00:17:34.875 "queue_depth": 128, 00:17:34.875 "io_size": 4096, 00:17:34.875 "runtime": 10.046478271484375, 00:17:34.875 "io_per_second": 2816.3103527425233, 00:17:34.875 "MiB_per_second": 11.001212315400482, 00:17:34.875 "fails_per_second": 0.0, 00:17:34.875 "timeout_per_second": 0.0, 00:17:34.875 "average_latency_us": 45330.532767632976, 00:17:34.875 "min_latency_us": 5873.967407407407, 00:17:34.875 "max_latency_us": 68739.98222222223 00:17:34.875 } 00:17:34.875 ] 00:17:34.875 } 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3851165 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3851165 ']' 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3851165 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- 
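run_bdevperf, used here and for every failure case below, follows one pattern: start bdevperf idle (-z) on a private RPC socket, attach a TLS-wrapped NVMe-oF controller through that socket, then let bdevperf.py drive the run and dump JSON stats. The 2816.31 IOPS at 4 KiB again checks out as ≈ 11.00 MiB/s. Condensed, with paths shortened and the socket-polling step elided (see the loop sketched earlier):

sock=/var/tmp/bdevperf.sock
./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# ...wait for $sock to answer RPCs...
./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oFzm7zxFRS
./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests
kill "$bdevperf_pid"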
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.875 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3851165 00:17:35.134 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:35.134 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:35.134 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3851165' 00:17:35.134 killing process with pid 3851165 00:17:35.134 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3851165 00:17:35.134 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.134 00:17:35.134 Latency(us) 00:17:35.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.134 =================================================================================================================== 00:17:35.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.134 [2024-07-15 13:09:56.580029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:35.134 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3851165 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y5OdcfITrm 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y5OdcfITrm 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y5OdcfITrm 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y5OdcfITrm' 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3852448 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3852448 /var/tmp/bdevperf.sock 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3852448 ']' 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- 
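target/tls.sh@146 wraps run_bdevperf in NOT: the attach is made with the second key (/tmp/tmp.y5OdcfITrm), which was never registered for cnode1/host1, so the step must fail for the test to pass. The real helper in autotest_common.sh also does the argument-type checks visible in the trace; its essence is just exit-status inversion, roughly:

NOT() {
    if "$@"; then
        return 1    # unexpectedly succeeded -> test failure
    fi
    return 0        # failed as expected
}
NOT false && echo "expected failure observed"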
common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.393 13:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.393 [2024-07-15 13:09:56.890014] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:35.393 [2024-07-15 13:09:56.890107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852448 ] 00:17:35.393 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.393 [2024-07-15 13:09:56.946759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.393 [2024-07-15 13:09:57.050948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.652 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.652 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:35.652 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y5OdcfITrm 00:17:35.910 [2024-07-15 13:09:57.384522] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.910 [2024-07-15 13:09:57.384624] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:35.910 [2024-07-15 13:09:57.396316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:35.910 [2024-07-15 13:09:57.396507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0f90 (107): Transport endpoint is not connected 00:17:35.910 [2024-07-15 13:09:57.397496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0f90 (9): Bad file descriptor 00:17:35.910 [2024-07-15 13:09:57.398495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:35.910 [2024-07-15 13:09:57.398515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:35.910 [2024-07-15 13:09:57.398539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:35.910 request: 00:17:35.910 { 00:17:35.910 "name": "TLSTEST", 00:17:35.910 "trtype": "tcp", 00:17:35.910 "traddr": "10.0.0.2", 00:17:35.910 "adrfam": "ipv4", 00:17:35.910 "trsvcid": "4420", 00:17:35.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.910 "prchk_reftag": false, 00:17:35.910 "prchk_guard": false, 00:17:35.910 "hdgst": false, 00:17:35.910 "ddgst": false, 00:17:35.910 "psk": "/tmp/tmp.y5OdcfITrm", 00:17:35.910 "method": "bdev_nvme_attach_controller", 00:17:35.910 "req_id": 1 00:17:35.910 } 00:17:35.910 Got JSON-RPC error response 00:17:35.910 response: 00:17:35.910 { 00:17:35.910 "code": -5, 00:17:35.910 "message": "Input/output error" 00:17:35.910 } 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3852448 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3852448 ']' 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3852448 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852448 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852448' 00:17:35.910 killing process with pid 3852448 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3852448 00:17:35.910 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.910 00:17:35.910 Latency(us) 00:17:35.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.910 =================================================================================================================== 00:17:35.910 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.910 [2024-07-15 13:09:57.440435] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:35.910 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3852448 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oFzm7zxFRS 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oFzm7zxFRS 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- 
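The request/response pair dumped above is the bare JSON-RPC payload; rpc.py adds the jsonrpc/id framing on the wire, and the -5 error code is the negated errno (EIO, "Input/output error") bubbled up from the failed TLS handshake. A hand-rolled probe that sends the same call over the Unix socket, purely illustrative and assuming a bdevperf instance is still listening there:

python3 << 'EOF'
import json, socket

req = {"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
       "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "psk": "/tmp/tmp.y5OdcfITrm"}}
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/var/tmp/bdevperf.sock")
s.sendall(json.dumps(req).encode())
print(s.recv(65536).decode())  # expect "code": -5, "message": "Input/output error"
EOF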
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oFzm7zxFRS 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oFzm7zxFRS' 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3852577 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3852577 /var/tmp/bdevperf.sock 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3852577 ']' 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.169 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.169 [2024-07-15 13:09:57.725166] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:17:36.169 [2024-07-15 13:09:57.725255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852577 ] 00:17:36.169 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.169 [2024-07-15 13:09:57.784972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.427 [2024-07-15 13:09:57.889112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.427 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.427 13:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:36.427 13:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.oFzm7zxFRS 00:17:36.685 [2024-07-15 13:09:58.220786] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.685 [2024-07-15 13:09:58.220941] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:36.685 [2024-07-15 13:09:58.228407] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:36.685 [2024-07-15 13:09:58.228440] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:36.685 [2024-07-15 13:09:58.228492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:36.685 [2024-07-15 13:09:58.228832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fef90 (107): Transport endpoint is not connected 00:17:36.685 [2024-07-15 13:09:58.229821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fef90 (9): Bad file descriptor 00:17:36.685 [2024-07-15 13:09:58.230821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:36.685 [2024-07-15 13:09:58.230856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:36.685 [2024-07-15 13:09:58.230872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:36.685 request: 00:17:36.685 { 00:17:36.685 "name": "TLSTEST", 00:17:36.685 "trtype": "tcp", 00:17:36.685 "traddr": "10.0.0.2", 00:17:36.685 "adrfam": "ipv4", 00:17:36.685 "trsvcid": "4420", 00:17:36.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.685 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:36.685 "prchk_reftag": false, 00:17:36.685 "prchk_guard": false, 00:17:36.685 "hdgst": false, 00:17:36.685 "ddgst": false, 00:17:36.685 "psk": "/tmp/tmp.oFzm7zxFRS", 00:17:36.685 "method": "bdev_nvme_attach_controller", 00:17:36.685 "req_id": 1 00:17:36.685 } 00:17:36.685 Got JSON-RPC error response 00:17:36.685 response: 00:17:36.685 { 00:17:36.685 "code": -5, 00:17:36.685 "message": "Input/output error" 00:17:36.685 } 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3852577 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3852577 ']' 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3852577 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852577 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852577' 00:17:36.685 killing process with pid 3852577 00:17:36.685 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3852577 00:17:36.685 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.685 00:17:36.686 Latency(us) 00:17:36.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.686 =================================================================================================================== 00:17:36.686 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:36.686 [2024-07-15 13:09:58.278951] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:36.686 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3852577 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oFzm7zxFRS 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oFzm7zxFRS 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oFzm7zxFRS 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oFzm7zxFRS' 00:17:36.943 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3852715 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3852715 /var/tmp/bdevperf.sock 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3852715 ']' 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.944 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.944 [2024-07-15 13:09:58.579966] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:17:36.944 [2024-07-15 13:09:58.580043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852715 ] 00:17:36.944 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.944 [2024-07-15 13:09:58.638979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.202 [2024-07-15 13:09:58.744186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.202 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.202 13:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.202 13:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oFzm7zxFRS 00:17:37.460 [2024-07-15 13:09:59.071449] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.460 [2024-07-15 13:09:59.071556] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.460 [2024-07-15 13:09:59.080553] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:37.460 [2024-07-15 13:09:59.080584] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:37.461 [2024-07-15 13:09:59.080629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:37.461 [2024-07-15 13:09:59.081493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203af90 (107): Transport endpoint is not connected 00:17:37.461 [2024-07-15 13:09:59.082487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203af90 (9): Bad file descriptor 00:17:37.461 [2024-07-15 13:09:59.083485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:37.461 [2024-07-15 13:09:59.083505] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:37.461 [2024-07-15 13:09:59.083531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:37.461 request: 00:17:37.461 { 00:17:37.461 "name": "TLSTEST", 00:17:37.461 "trtype": "tcp", 00:17:37.461 "traddr": "10.0.0.2", 00:17:37.461 "adrfam": "ipv4", 00:17:37.461 "trsvcid": "4420", 00:17:37.461 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:37.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.461 "prchk_reftag": false, 00:17:37.461 "prchk_guard": false, 00:17:37.461 "hdgst": false, 00:17:37.461 "ddgst": false, 00:17:37.461 "psk": "/tmp/tmp.oFzm7zxFRS", 00:17:37.461 "method": "bdev_nvme_attach_controller", 00:17:37.461 "req_id": 1 00:17:37.461 } 00:17:37.461 Got JSON-RPC error response 00:17:37.461 response: 00:17:37.461 { 00:17:37.461 "code": -5, 00:17:37.461 "message": "Input/output error" 00:17:37.461 } 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3852715 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3852715 ']' 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3852715 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852715 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852715' 00:17:37.461 killing process with pid 3852715 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3852715 00:17:37.461 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.461 00:17:37.461 Latency(us) 00:17:37.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.461 =================================================================================================================== 00:17:37.461 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.461 [2024-07-15 13:09:59.128673] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.461 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3852715 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3852760 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3852760 /var/tmp/bdevperf.sock 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3852760 ']' 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.719 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.978 [2024-07-15 13:09:59.423642] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:17:37.978 [2024-07-15 13:09:59.423723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852760 ] 00:17:37.978 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.978 [2024-07-15 13:09:59.482580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.978 [2024-07-15 13:09:59.587428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.237 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.237 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:38.237 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:38.237 [2024-07-15 13:09:59.922831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.237 [2024-07-15 13:09:59.924678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069770 (9): Bad file descriptor 00:17:38.237 [2024-07-15 13:09:59.925674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.237 [2024-07-15 13:09:59.925694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.237 [2024-07-15 13:09:59.925711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
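The failure above is the negative case at target/tls.sh@155: run_bdevperf is invoked with an empty PSK, so bdev_nvme_attach_controller dials the TLS-required listener without any credentials, the connection collapses during setup (spdk_sock_recv errno 107, Transport endpoint is not connected), and the RPC surfaces -5 (Input/output error) in the request/response dump that follows. A condensed reproduction against the same addresses — scripts/rpc.py abbreviates the full workspace path used in the traces:

# bdevperf was started as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# no --psk: against a listener created with -k this is expected to fail
# with JSON-RPC error -5 (Input/output error)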
00:17:38.237 request: 00:17:38.237 { 00:17:38.237 "name": "TLSTEST", 00:17:38.237 "trtype": "tcp", 00:17:38.237 "traddr": "10.0.0.2", 00:17:38.237 "adrfam": "ipv4", 00:17:38.237 "trsvcid": "4420", 00:17:38.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.237 "prchk_reftag": false, 00:17:38.237 "prchk_guard": false, 00:17:38.237 "hdgst": false, 00:17:38.237 "ddgst": false, 00:17:38.237 "method": "bdev_nvme_attach_controller", 00:17:38.237 "req_id": 1 00:17:38.237 } 00:17:38.237 Got JSON-RPC error response 00:17:38.237 response: 00:17:38.237 { 00:17:38.237 "code": -5, 00:17:38.237 "message": "Input/output error" 00:17:38.237 } 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3852760 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3852760 ']' 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3852760 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852760 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852760' 00:17:38.495 killing process with pid 3852760 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3852760 00:17:38.495 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.495 00:17:38.495 Latency(us) 00:17:38.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.495 =================================================================================================================== 00:17:38.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.495 13:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3852760 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3849353 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3849353 ']' 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3849353 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3849353 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3849353' 00:17:38.753 
killing process with pid 3849353 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3849353 00:17:38.753 [2024-07-15 13:10:00.250348] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:38.753 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3849353 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IMVN0fzbqL 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IMVN0fzbqL 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3852990 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3852990 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3852990 ']' 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.012 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.012 [2024-07-15 13:10:00.619937] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
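The key_long value generated above is in the NVMe/TCP TLS PSK interchange format: format_interchange_psk (target/tls.sh@159) delegates to format_key in nvmf/common.sh, whose python heredoc base64-encodes the configured key material plus a CRC32 trailer and prepends the NVMeTLSkey-1:<hash>: header, hash identifier 02 selecting SHA-384 in the interchange convention. The snippet below is a reconstruction of that helper, not a quote from the source tree, but it reproduces the key printed in the trace:

# sketch of what nvmf/common.sh's format_key heredoc computes (reconstructed)
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte little-endian CRC32 trailer
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF

The script then writes the key to a mktemp file and chmod's it 0600; as the later negative tests show, both initiator and target refuse a PSK file with looser permissions. The nvmf target now starting (pid 3852990) will be configured with this key.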
00:17:39.012 [2024-07-15 13:10:00.620026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.012 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.012 [2024-07-15 13:10:00.682308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.271 [2024-07-15 13:10:00.789663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.271 [2024-07-15 13:10:00.789731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.271 [2024-07-15 13:10:00.789744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.271 [2024-07-15 13:10:00.789755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.271 [2024-07-15 13:10:00.789765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.271 [2024-07-15 13:10:00.789791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMVN0fzbqL 00:17:39.271 13:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.529 [2024-07-15 13:10:01.162941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.529 13:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.788 13:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.047 [2024-07-15 13:10:01.644198] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.047 [2024-07-15 13:10:01.644454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.047 13:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.304 malloc0 00:17:40.304 13:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.561 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IMVN0fzbqL 00:17:40.819 [2024-07-15 13:10:02.477031] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMVN0fzbqL 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMVN0fzbqL' 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3853173 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3853173 /var/tmp/bdevperf.sock 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3853173 ']' 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.820 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.078 [2024-07-15 13:10:02.542589] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
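Condensed, the target-side bring-up that setup_nvmf_tgt (target/tls.sh@49-58) performed above is the standard NVMe-oF sequence with two TLS-specific additions: -k on the listener to require TLS (logged as experimental) and --psk on the host entry pointing at the 0600 key file. With rpc.py again abbreviating the full workspace path:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL

The add_host step prints the 'PSK path' deprecation warning seen above. The bdevperf instance now starting (pid 3853173) supplies the same key with --psk, and this time the handshake completes: TLSTESTn1 is created and carries I/O for the full 10-second run in the block that follows.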
00:17:41.078 [2024-07-15 13:10:02.542674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853173 ] 00:17:41.078 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.078 [2024-07-15 13:10:02.600485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.078 [2024-07-15 13:10:02.705262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.335 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.335 13:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:41.335 13:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:17:41.335 [2024-07-15 13:10:03.034623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.335 [2024-07-15 13:10:03.034738] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.593 TLSTESTn1 00:17:41.593 13:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.593 Running I/O for 10 seconds... 00:17:53.826 00:17:53.826 Latency(us) 00:17:53.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.826 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.826 Verification LBA range: start 0x0 length 0x2000 00:17:53.826 TLSTESTn1 : 10.04 2820.27 11.02 0.00 0.00 45272.60 7378.87 71846.87 00:17:53.826 =================================================================================================================== 00:17:53.826 Total : 2820.27 11.02 0.00 0.00 45272.60 7378.87 71846.87 00:17:53.826 { 00:17:53.826 "core_count": 1, 00:17:53.826 "test_results": [ 00:17:53.826 { 00:17:53.826 "job": "TLSTESTn1", 00:17:53.826 "test_status": "finished", 00:17:53.826 "core_mask": "0x4", 00:17:53.826 "workload": "verify", 00:17:53.826 "verify_LBA_range_start": 0, 00:17:53.826 "verify_LBA_range_len": 8192, 00:17:53.826 "queue_depth": 128, 00:17:53.826 "io_size": 4096, 00:17:53.826 "runtime": 10.04266357421875, 00:17:53.826 "io_per_second": 2820.2676102675546, 00:17:53.826 "MiB_per_second": 11.016670352607635, 00:17:53.826 "fails_per_second": 0.0, 00:17:53.826 "timeout_per_second": 0.0, 00:17:53.826 "average_latency_us": 45272.59901626868, 00:17:53.826 "min_latency_us": 7378.868148148148, 00:17:53.826 "max_latency_us": 71846.87407407408 00:17:53.826 } 00:17:53.826 ] 00:17:53.826 } 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3853173 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3853173 ']' 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3853173 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3853173 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3853173' 00:17:53.826 killing process with pid 3853173 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3853173 00:17:53.826 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.826 00:17:53.826 Latency(us) 00:17:53.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.826 =================================================================================================================== 00:17:53.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.826 [2024-07-15 13:10:13.352023] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3853173 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IMVN0fzbqL 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMVN0fzbqL 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMVN0fzbqL 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMVN0fzbqL 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMVN0fzbqL' 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3854504 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3854504 /var/tmp/bdevperf.sock 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3854504 ']' 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.826 [2024-07-15 13:10:13.673532] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:53.826 [2024-07-15 13:10:13.673624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854504 ] 00:17:53.826 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.826 [2024-07-15 13:10:13.733032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.826 [2024-07-15 13:10:13.840450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.826 13:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:17:53.826 [2024-07-15 13:10:14.198242] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.826 [2024-07-15 13:10:14.198323] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:53.826 [2024-07-15 13:10:14.198337] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IMVN0fzbqL 00:17:53.826 request: 00:17:53.826 { 00:17:53.826 "name": "TLSTEST", 00:17:53.826 "trtype": "tcp", 00:17:53.826 "traddr": "10.0.0.2", 00:17:53.826 "adrfam": "ipv4", 00:17:53.826 "trsvcid": "4420", 00:17:53.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.826 "prchk_reftag": false, 00:17:53.826 "prchk_guard": false, 00:17:53.826 "hdgst": false, 00:17:53.826 "ddgst": false, 00:17:53.826 "psk": "/tmp/tmp.IMVN0fzbqL", 00:17:53.826 "method": "bdev_nvme_attach_controller", 00:17:53.826 "req_id": 1 00:17:53.826 } 00:17:53.826 Got JSON-RPC error response 00:17:53.826 response: 00:17:53.826 { 00:17:53.826 "code": -1, 00:17:53.827 "message": "Operation not permitted" 00:17:53.827 } 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3854504 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3854504 ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3854504 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854504 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854504' 00:17:53.827 killing process with pid 3854504 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3854504 00:17:53.827 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.827 00:17:53.827 Latency(us) 00:17:53.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.827 =================================================================================================================== 00:17:53.827 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3854504 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3852990 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3852990 ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3852990 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852990 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852990' 00:17:53.827 killing process with pid 3852990 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3852990 00:17:53.827 [2024-07-15 13:10:14.539750] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3852990 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3854645 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3854645 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3854645 ']' 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.827 13:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 [2024-07-15 13:10:14.900449] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:53.827 [2024-07-15 13:10:14.900553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.827 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.827 [2024-07-15 13:10:14.969016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.827 [2024-07-15 13:10:15.085353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.827 [2024-07-15 13:10:15.085411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.827 [2024-07-15 13:10:15.085436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.827 [2024-07-15 13:10:15.085449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.827 [2024-07-15 13:10:15.085460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
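The sequence above is the initiator-side permissions test: target/tls.sh@170 loosened the key file to 0666, and the subsequent attach (pid 3854504) failed before any data moved because bdev_nvme_load_psk rejects a group/world-accessible key, returning -1 (Operation not permitted) rather than the -5 handshake errors seen earlier. Condensed from the trace:

chmod 0666 /tmp/tmp.IMVN0fzbqL
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.IMVN0fzbqL
# expected: "Incorrect permissions for PSK file" ->
# JSON-RPC error -1 (Operation not permitted)

The fresh nvmf target starting above (pid 3854645) lets the same world-readable key be exercised against the target-side check next.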
00:17:53.827 [2024-07-15 13:10:15.085489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMVN0fzbqL 00:17:54.392 13:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.392 [2024-07-15 13:10:16.074142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.651 13:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.651 13:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.908 [2024-07-15 13:10:16.571486] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.909 [2024-07-15 13:10:16.571757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.909 13:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:55.474 malloc0 00:17:55.474 13:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.474 13:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:17:55.732 [2024-07-15 13:10:17.341307] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:55.732 [2024-07-15 13:10:17.341346] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:55.732 [2024-07-15 13:10:17.341391] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to 
add host to TCP transport 00:17:55.732 request: 00:17:55.732 { 00:17:55.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.732 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.732 "psk": "/tmp/tmp.IMVN0fzbqL", 00:17:55.732 "method": "nvmf_subsystem_add_host", 00:17:55.732 "req_id": 1 00:17:55.732 } 00:17:55.732 Got JSON-RPC error response 00:17:55.732 response: 00:17:55.732 { 00:17:55.732 "code": -32603, 00:17:55.732 "message": "Internal error" 00:17:55.732 } 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3854645 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3854645 ']' 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3854645 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854645 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854645' 00:17:55.732 killing process with pid 3854645 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3854645 00:17:55.732 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3854645 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IMVN0fzbqL 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3855071 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3855071 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3855071 ']' 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
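This is the mirror-image check on the target side: with the key still 0666, tcp_load_psk refuses to read it, nvmf_subsystem_add_host fails inside the subsystem-add path, and the error surfaces as JSON-RPC -32603 (Internal error) — a different code than the initiator's -1, reflecting where the rejection happens. Condensed:

scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL
# with a 0666 key: tcp_load_psk -> "Incorrect permissions for PSK file",
# reported as JSON-RPC -32603 (Internal error)
chmod 0600 /tmp/tmp.IMVN0fzbqL   # target/tls.sh@181 restores sane permissions

After the chmod, target/tls.sh@184 starts one more nvmf target (pid 3855071, waited for above) for the final positive pass.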
00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.991 13:10:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.250 [2024-07-15 13:10:17.730943] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:17:56.250 [2024-07-15 13:10:17.731011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.250 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.250 [2024-07-15 13:10:17.795780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.250 [2024-07-15 13:10:17.909641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.250 [2024-07-15 13:10:17.909717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.250 [2024-07-15 13:10:17.909742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.250 [2024-07-15 13:10:17.909756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.250 [2024-07-15 13:10:17.909767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.250 [2024-07-15 13:10:17.909796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMVN0fzbqL 00:17:57.184 13:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.442 [2024-07-15 13:10:18.977918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.442 13:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:57.700 13:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:57.958 [2024-07-15 13:10:19.555424] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.958 [2024-07-15 13:10:19.555679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.958 13:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:58.216 malloc0 00:17:58.216 13:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:58.473 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:17:58.731 [2024-07-15 13:10:20.373450] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3855367 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3855367 /var/tmp/bdevperf.sock 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3855367 ']' 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.731 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-07-15 13:10:20.429463] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
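The final positive pass is now assembling: the target has the transport, subsystem, TLS listener, malloc0 namespace, and host+PSK in place, and the bdevperf instance starting below (pid 3855367) attaches with the 0600 key to create TLSTESTn1. Before running I/O, target/tls.sh@196-197 snapshots both applications' live configuration with save_config; the two large JSON dumps that follow are those snapshots (tgtconf for the target, bdevperfconf for bdevperf). The calls, condensed — the script captures the output in shell variables, so the file redirects here are an illustrative substitution:

rpc.py save_config > tgtconf.json                        # target, default socket /var/tmp/spdk.sock
rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json

In the target dump, the TLS-relevant pieces are the listener entry with "secure_channel": true and the nvmf_subsystem_add_host params carrying "psk": "/tmp/tmp.IMVN0fzbqL"; the bdevperf dump shows the same key on its bdev_nvme_attach_controller entry.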
00:17:58.731 [2024-07-15 13:10:20.429547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855367 ] 00:17:58.989 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.989 [2024-07-15 13:10:20.487563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.989 [2024-07-15 13:10:20.595704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.248 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.248 13:10:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:59.248 13:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:17:59.248 [2024-07-15 13:10:20.929829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.248 [2024-07-15 13:10:20.929983] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:59.506 TLSTESTn1 00:17:59.506 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:59.763 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:59.763 "subsystems": [ 00:17:59.763 { 00:17:59.763 "subsystem": "keyring", 00:17:59.763 "config": [] 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "subsystem": "iobuf", 00:17:59.763 "config": [ 00:17:59.763 { 00:17:59.763 "method": "iobuf_set_options", 00:17:59.763 "params": { 00:17:59.763 "small_pool_count": 8192, 00:17:59.763 "large_pool_count": 1024, 00:17:59.763 "small_bufsize": 8192, 00:17:59.763 "large_bufsize": 135168 00:17:59.763 } 00:17:59.763 } 00:17:59.763 ] 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "subsystem": "sock", 00:17:59.763 "config": [ 00:17:59.763 { 00:17:59.763 "method": "sock_set_default_impl", 00:17:59.763 "params": { 00:17:59.763 "impl_name": "posix" 00:17:59.763 } 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "method": "sock_impl_set_options", 00:17:59.763 "params": { 00:17:59.763 "impl_name": "ssl", 00:17:59.763 "recv_buf_size": 4096, 00:17:59.763 "send_buf_size": 4096, 00:17:59.763 "enable_recv_pipe": true, 00:17:59.763 "enable_quickack": false, 00:17:59.763 "enable_placement_id": 0, 00:17:59.763 "enable_zerocopy_send_server": true, 00:17:59.763 "enable_zerocopy_send_client": false, 00:17:59.763 "zerocopy_threshold": 0, 00:17:59.763 "tls_version": 0, 00:17:59.763 "enable_ktls": false 00:17:59.763 } 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "method": "sock_impl_set_options", 00:17:59.763 "params": { 00:17:59.763 "impl_name": "posix", 00:17:59.763 "recv_buf_size": 2097152, 00:17:59.763 "send_buf_size": 2097152, 00:17:59.763 "enable_recv_pipe": true, 00:17:59.763 "enable_quickack": false, 00:17:59.763 "enable_placement_id": 0, 00:17:59.763 "enable_zerocopy_send_server": true, 00:17:59.763 "enable_zerocopy_send_client": false, 00:17:59.763 "zerocopy_threshold": 0, 00:17:59.763 "tls_version": 0, 00:17:59.763 "enable_ktls": false 00:17:59.763 } 00:17:59.763 } 00:17:59.763 ] 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "subsystem": "vmd", 00:17:59.763 
"config": [] 00:17:59.763 }, 00:17:59.763 { 00:17:59.763 "subsystem": "accel", 00:17:59.764 "config": [ 00:17:59.764 { 00:17:59.764 "method": "accel_set_options", 00:17:59.764 "params": { 00:17:59.764 "small_cache_size": 128, 00:17:59.764 "large_cache_size": 16, 00:17:59.764 "task_count": 2048, 00:17:59.764 "sequence_count": 2048, 00:17:59.764 "buf_count": 2048 00:17:59.764 } 00:17:59.764 } 00:17:59.764 ] 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "subsystem": "bdev", 00:17:59.764 "config": [ 00:17:59.764 { 00:17:59.764 "method": "bdev_set_options", 00:17:59.764 "params": { 00:17:59.764 "bdev_io_pool_size": 65535, 00:17:59.764 "bdev_io_cache_size": 256, 00:17:59.764 "bdev_auto_examine": true, 00:17:59.764 "iobuf_small_cache_size": 128, 00:17:59.764 "iobuf_large_cache_size": 16 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_raid_set_options", 00:17:59.764 "params": { 00:17:59.764 "process_window_size_kb": 1024 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_iscsi_set_options", 00:17:59.764 "params": { 00:17:59.764 "timeout_sec": 30 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_nvme_set_options", 00:17:59.764 "params": { 00:17:59.764 "action_on_timeout": "none", 00:17:59.764 "timeout_us": 0, 00:17:59.764 "timeout_admin_us": 0, 00:17:59.764 "keep_alive_timeout_ms": 10000, 00:17:59.764 "arbitration_burst": 0, 00:17:59.764 "low_priority_weight": 0, 00:17:59.764 "medium_priority_weight": 0, 00:17:59.764 "high_priority_weight": 0, 00:17:59.764 "nvme_adminq_poll_period_us": 10000, 00:17:59.764 "nvme_ioq_poll_period_us": 0, 00:17:59.764 "io_queue_requests": 0, 00:17:59.764 "delay_cmd_submit": true, 00:17:59.764 "transport_retry_count": 4, 00:17:59.764 "bdev_retry_count": 3, 00:17:59.764 "transport_ack_timeout": 0, 00:17:59.764 "ctrlr_loss_timeout_sec": 0, 00:17:59.764 "reconnect_delay_sec": 0, 00:17:59.764 "fast_io_fail_timeout_sec": 0, 00:17:59.764 "disable_auto_failback": false, 00:17:59.764 "generate_uuids": false, 00:17:59.764 "transport_tos": 0, 00:17:59.764 "nvme_error_stat": false, 00:17:59.764 "rdma_srq_size": 0, 00:17:59.764 "io_path_stat": false, 00:17:59.764 "allow_accel_sequence": false, 00:17:59.764 "rdma_max_cq_size": 0, 00:17:59.764 "rdma_cm_event_timeout_ms": 0, 00:17:59.764 "dhchap_digests": [ 00:17:59.764 "sha256", 00:17:59.764 "sha384", 00:17:59.764 "sha512" 00:17:59.764 ], 00:17:59.764 "dhchap_dhgroups": [ 00:17:59.764 "null", 00:17:59.764 "ffdhe2048", 00:17:59.764 "ffdhe3072", 00:17:59.764 "ffdhe4096", 00:17:59.764 "ffdhe6144", 00:17:59.764 "ffdhe8192" 00:17:59.764 ] 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_nvme_set_hotplug", 00:17:59.764 "params": { 00:17:59.764 "period_us": 100000, 00:17:59.764 "enable": false 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_malloc_create", 00:17:59.764 "params": { 00:17:59.764 "name": "malloc0", 00:17:59.764 "num_blocks": 8192, 00:17:59.764 "block_size": 4096, 00:17:59.764 "physical_block_size": 4096, 00:17:59.764 "uuid": "69478be1-91b7-43ec-b06b-853eae8c8806", 00:17:59.764 "optimal_io_boundary": 0 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "bdev_wait_for_examine" 00:17:59.764 } 00:17:59.764 ] 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "subsystem": "nbd", 00:17:59.764 "config": [] 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "subsystem": "scheduler", 00:17:59.764 "config": [ 00:17:59.764 { 00:17:59.764 "method": "framework_set_scheduler", 00:17:59.764 "params": { 
00:17:59.764 "name": "static" 00:17:59.764 } 00:17:59.764 } 00:17:59.764 ] 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "subsystem": "nvmf", 00:17:59.764 "config": [ 00:17:59.764 { 00:17:59.764 "method": "nvmf_set_config", 00:17:59.764 "params": { 00:17:59.764 "discovery_filter": "match_any", 00:17:59.764 "admin_cmd_passthru": { 00:17:59.764 "identify_ctrlr": false 00:17:59.764 } 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_set_max_subsystems", 00:17:59.764 "params": { 00:17:59.764 "max_subsystems": 1024 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_set_crdt", 00:17:59.764 "params": { 00:17:59.764 "crdt1": 0, 00:17:59.764 "crdt2": 0, 00:17:59.764 "crdt3": 0 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_create_transport", 00:17:59.764 "params": { 00:17:59.764 "trtype": "TCP", 00:17:59.764 "max_queue_depth": 128, 00:17:59.764 "max_io_qpairs_per_ctrlr": 127, 00:17:59.764 "in_capsule_data_size": 4096, 00:17:59.764 "max_io_size": 131072, 00:17:59.764 "io_unit_size": 131072, 00:17:59.764 "max_aq_depth": 128, 00:17:59.764 "num_shared_buffers": 511, 00:17:59.764 "buf_cache_size": 4294967295, 00:17:59.764 "dif_insert_or_strip": false, 00:17:59.764 "zcopy": false, 00:17:59.764 "c2h_success": false, 00:17:59.764 "sock_priority": 0, 00:17:59.764 "abort_timeout_sec": 1, 00:17:59.764 "ack_timeout": 0, 00:17:59.764 "data_wr_pool_size": 0 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_create_subsystem", 00:17:59.764 "params": { 00:17:59.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.764 "allow_any_host": false, 00:17:59.764 "serial_number": "SPDK00000000000001", 00:17:59.764 "model_number": "SPDK bdev Controller", 00:17:59.764 "max_namespaces": 10, 00:17:59.764 "min_cntlid": 1, 00:17:59.764 "max_cntlid": 65519, 00:17:59.764 "ana_reporting": false 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_subsystem_add_host", 00:17:59.764 "params": { 00:17:59.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.764 "host": "nqn.2016-06.io.spdk:host1", 00:17:59.764 "psk": "/tmp/tmp.IMVN0fzbqL" 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_subsystem_add_ns", 00:17:59.764 "params": { 00:17:59.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.764 "namespace": { 00:17:59.764 "nsid": 1, 00:17:59.764 "bdev_name": "malloc0", 00:17:59.764 "nguid": "69478BE191B743ECB06B853EAE8C8806", 00:17:59.764 "uuid": "69478be1-91b7-43ec-b06b-853eae8c8806", 00:17:59.764 "no_auto_visible": false 00:17:59.764 } 00:17:59.764 } 00:17:59.764 }, 00:17:59.764 { 00:17:59.764 "method": "nvmf_subsystem_add_listener", 00:17:59.764 "params": { 00:17:59.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.764 "listen_address": { 00:17:59.764 "trtype": "TCP", 00:17:59.764 "adrfam": "IPv4", 00:17:59.764 "traddr": "10.0.0.2", 00:17:59.764 "trsvcid": "4420" 00:17:59.764 }, 00:17:59.764 "secure_channel": true 00:17:59.764 } 00:17:59.764 } 00:17:59.764 ] 00:17:59.764 } 00:17:59.764 ] 00:17:59.764 }' 00:17:59.764 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:00.022 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:00.022 "subsystems": [ 00:18:00.022 { 00:18:00.022 "subsystem": "keyring", 00:18:00.022 "config": [] 00:18:00.022 }, 00:18:00.022 { 00:18:00.022 "subsystem": "iobuf", 00:18:00.022 "config": [ 00:18:00.022 { 00:18:00.022 "method": 
"iobuf_set_options", 00:18:00.022 "params": { 00:18:00.023 "small_pool_count": 8192, 00:18:00.023 "large_pool_count": 1024, 00:18:00.023 "small_bufsize": 8192, 00:18:00.023 "large_bufsize": 135168 00:18:00.023 } 00:18:00.023 } 00:18:00.023 ] 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "subsystem": "sock", 00:18:00.023 "config": [ 00:18:00.023 { 00:18:00.023 "method": "sock_set_default_impl", 00:18:00.023 "params": { 00:18:00.023 "impl_name": "posix" 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "sock_impl_set_options", 00:18:00.023 "params": { 00:18:00.023 "impl_name": "ssl", 00:18:00.023 "recv_buf_size": 4096, 00:18:00.023 "send_buf_size": 4096, 00:18:00.023 "enable_recv_pipe": true, 00:18:00.023 "enable_quickack": false, 00:18:00.023 "enable_placement_id": 0, 00:18:00.023 "enable_zerocopy_send_server": true, 00:18:00.023 "enable_zerocopy_send_client": false, 00:18:00.023 "zerocopy_threshold": 0, 00:18:00.023 "tls_version": 0, 00:18:00.023 "enable_ktls": false 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "sock_impl_set_options", 00:18:00.023 "params": { 00:18:00.023 "impl_name": "posix", 00:18:00.023 "recv_buf_size": 2097152, 00:18:00.023 "send_buf_size": 2097152, 00:18:00.023 "enable_recv_pipe": true, 00:18:00.023 "enable_quickack": false, 00:18:00.023 "enable_placement_id": 0, 00:18:00.023 "enable_zerocopy_send_server": true, 00:18:00.023 "enable_zerocopy_send_client": false, 00:18:00.023 "zerocopy_threshold": 0, 00:18:00.023 "tls_version": 0, 00:18:00.023 "enable_ktls": false 00:18:00.023 } 00:18:00.023 } 00:18:00.023 ] 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "subsystem": "vmd", 00:18:00.023 "config": [] 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "subsystem": "accel", 00:18:00.023 "config": [ 00:18:00.023 { 00:18:00.023 "method": "accel_set_options", 00:18:00.023 "params": { 00:18:00.023 "small_cache_size": 128, 00:18:00.023 "large_cache_size": 16, 00:18:00.023 "task_count": 2048, 00:18:00.023 "sequence_count": 2048, 00:18:00.023 "buf_count": 2048 00:18:00.023 } 00:18:00.023 } 00:18:00.023 ] 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "subsystem": "bdev", 00:18:00.023 "config": [ 00:18:00.023 { 00:18:00.023 "method": "bdev_set_options", 00:18:00.023 "params": { 00:18:00.023 "bdev_io_pool_size": 65535, 00:18:00.023 "bdev_io_cache_size": 256, 00:18:00.023 "bdev_auto_examine": true, 00:18:00.023 "iobuf_small_cache_size": 128, 00:18:00.023 "iobuf_large_cache_size": 16 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_raid_set_options", 00:18:00.023 "params": { 00:18:00.023 "process_window_size_kb": 1024 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_iscsi_set_options", 00:18:00.023 "params": { 00:18:00.023 "timeout_sec": 30 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_nvme_set_options", 00:18:00.023 "params": { 00:18:00.023 "action_on_timeout": "none", 00:18:00.023 "timeout_us": 0, 00:18:00.023 "timeout_admin_us": 0, 00:18:00.023 "keep_alive_timeout_ms": 10000, 00:18:00.023 "arbitration_burst": 0, 00:18:00.023 "low_priority_weight": 0, 00:18:00.023 "medium_priority_weight": 0, 00:18:00.023 "high_priority_weight": 0, 00:18:00.023 "nvme_adminq_poll_period_us": 10000, 00:18:00.023 "nvme_ioq_poll_period_us": 0, 00:18:00.023 "io_queue_requests": 512, 00:18:00.023 "delay_cmd_submit": true, 00:18:00.023 "transport_retry_count": 4, 00:18:00.023 "bdev_retry_count": 3, 00:18:00.023 "transport_ack_timeout": 0, 00:18:00.023 "ctrlr_loss_timeout_sec": 0, 00:18:00.023 
"reconnect_delay_sec": 0, 00:18:00.023 "fast_io_fail_timeout_sec": 0, 00:18:00.023 "disable_auto_failback": false, 00:18:00.023 "generate_uuids": false, 00:18:00.023 "transport_tos": 0, 00:18:00.023 "nvme_error_stat": false, 00:18:00.023 "rdma_srq_size": 0, 00:18:00.023 "io_path_stat": false, 00:18:00.023 "allow_accel_sequence": false, 00:18:00.023 "rdma_max_cq_size": 0, 00:18:00.023 "rdma_cm_event_timeout_ms": 0, 00:18:00.023 "dhchap_digests": [ 00:18:00.023 "sha256", 00:18:00.023 "sha384", 00:18:00.023 "sha512" 00:18:00.023 ], 00:18:00.023 "dhchap_dhgroups": [ 00:18:00.023 "null", 00:18:00.023 "ffdhe2048", 00:18:00.023 "ffdhe3072", 00:18:00.023 "ffdhe4096", 00:18:00.023 "ffdhe6144", 00:18:00.023 "ffdhe8192" 00:18:00.023 ] 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_nvme_attach_controller", 00:18:00.023 "params": { 00:18:00.023 "name": "TLSTEST", 00:18:00.023 "trtype": "TCP", 00:18:00.023 "adrfam": "IPv4", 00:18:00.023 "traddr": "10.0.0.2", 00:18:00.023 "trsvcid": "4420", 00:18:00.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.023 "prchk_reftag": false, 00:18:00.023 "prchk_guard": false, 00:18:00.023 "ctrlr_loss_timeout_sec": 0, 00:18:00.023 "reconnect_delay_sec": 0, 00:18:00.023 "fast_io_fail_timeout_sec": 0, 00:18:00.023 "psk": "/tmp/tmp.IMVN0fzbqL", 00:18:00.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.023 "hdgst": false, 00:18:00.023 "ddgst": false 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_nvme_set_hotplug", 00:18:00.023 "params": { 00:18:00.023 "period_us": 100000, 00:18:00.023 "enable": false 00:18:00.023 } 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "method": "bdev_wait_for_examine" 00:18:00.023 } 00:18:00.023 ] 00:18:00.023 }, 00:18:00.023 { 00:18:00.023 "subsystem": "nbd", 00:18:00.023 "config": [] 00:18:00.023 } 00:18:00.023 ] 00:18:00.023 }' 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3855367 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3855367 ']' 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3855367 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855367 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855367' 00:18:00.023 killing process with pid 3855367 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3855367 00:18:00.023 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.023 00:18:00.023 Latency(us) 00:18:00.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.023 =================================================================================================================== 00:18:00.023 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.023 [2024-07-15 13:10:21.670237] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:00.023 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 
-- # wait 3855367 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3855071 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3855071 ']' 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3855071 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855071 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855071' 00:18:00.282 killing process with pid 3855071 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3855071 00:18:00.282 [2024-07-15 13:10:21.969126] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.282 13:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3855071 00:18:00.849 13:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:00.849 13:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.849 13:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:00.849 "subsystems": [ 00:18:00.849 { 00:18:00.849 "subsystem": "keyring", 00:18:00.849 "config": [] 00:18:00.849 }, 00:18:00.849 { 00:18:00.849 "subsystem": "iobuf", 00:18:00.849 "config": [ 00:18:00.849 { 00:18:00.849 "method": "iobuf_set_options", 00:18:00.849 "params": { 00:18:00.849 "small_pool_count": 8192, 00:18:00.849 "large_pool_count": 1024, 00:18:00.849 "small_bufsize": 8192, 00:18:00.849 "large_bufsize": 135168 00:18:00.849 } 00:18:00.849 } 00:18:00.849 ] 00:18:00.849 }, 00:18:00.849 { 00:18:00.849 "subsystem": "sock", 00:18:00.849 "config": [ 00:18:00.849 { 00:18:00.849 "method": "sock_set_default_impl", 00:18:00.849 "params": { 00:18:00.849 "impl_name": "posix" 00:18:00.849 } 00:18:00.849 }, 00:18:00.849 { 00:18:00.849 "method": "sock_impl_set_options", 00:18:00.849 "params": { 00:18:00.849 "impl_name": "ssl", 00:18:00.849 "recv_buf_size": 4096, 00:18:00.849 "send_buf_size": 4096, 00:18:00.849 "enable_recv_pipe": true, 00:18:00.849 "enable_quickack": false, 00:18:00.849 "enable_placement_id": 0, 00:18:00.849 "enable_zerocopy_send_server": true, 00:18:00.849 "enable_zerocopy_send_client": false, 00:18:00.849 "zerocopy_threshold": 0, 00:18:00.849 "tls_version": 0, 00:18:00.849 "enable_ktls": false 00:18:00.849 } 00:18:00.849 }, 00:18:00.849 { 00:18:00.849 "method": "sock_impl_set_options", 00:18:00.849 "params": { 00:18:00.849 "impl_name": "posix", 00:18:00.849 "recv_buf_size": 2097152, 00:18:00.849 "send_buf_size": 2097152, 00:18:00.849 "enable_recv_pipe": true, 00:18:00.849 "enable_quickack": false, 00:18:00.849 "enable_placement_id": 0, 00:18:00.849 "enable_zerocopy_send_server": true, 00:18:00.849 "enable_zerocopy_send_client": false, 00:18:00.849 "zerocopy_threshold": 0, 00:18:00.849 "tls_version": 0, 00:18:00.849 "enable_ktls": false 00:18:00.849 } 00:18:00.849 } 00:18:00.849 ] 00:18:00.849 }, 00:18:00.849 { 00:18:00.850 "subsystem": "vmd", 00:18:00.850 "config": [] 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 
"subsystem": "accel", 00:18:00.850 "config": [ 00:18:00.850 { 00:18:00.850 "method": "accel_set_options", 00:18:00.850 "params": { 00:18:00.850 "small_cache_size": 128, 00:18:00.850 "large_cache_size": 16, 00:18:00.850 "task_count": 2048, 00:18:00.850 "sequence_count": 2048, 00:18:00.850 "buf_count": 2048 00:18:00.850 } 00:18:00.850 } 00:18:00.850 ] 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "subsystem": "bdev", 00:18:00.850 "config": [ 00:18:00.850 { 00:18:00.850 "method": "bdev_set_options", 00:18:00.850 "params": { 00:18:00.850 "bdev_io_pool_size": 65535, 00:18:00.850 "bdev_io_cache_size": 256, 00:18:00.850 "bdev_auto_examine": true, 00:18:00.850 "iobuf_small_cache_size": 128, 00:18:00.850 "iobuf_large_cache_size": 16 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_raid_set_options", 00:18:00.850 "params": { 00:18:00.850 "process_window_size_kb": 1024 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_iscsi_set_options", 00:18:00.850 "params": { 00:18:00.850 "timeout_sec": 30 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_nvme_set_options", 00:18:00.850 "params": { 00:18:00.850 "action_on_timeout": "none", 00:18:00.850 "timeout_us": 0, 00:18:00.850 "timeout_admin_us": 0, 00:18:00.850 "keep_alive_timeout_ms": 10000, 00:18:00.850 "arbitration_burst": 0, 00:18:00.850 "low_priority_weight": 0, 00:18:00.850 "medium_priority_weight": 0, 00:18:00.850 "high_priority_weight": 0, 00:18:00.850 "nvme_adminq_poll_period_us": 10000, 00:18:00.850 "nvme_ioq_poll_period_us": 0, 00:18:00.850 "io_queue_requests": 0, 00:18:00.850 "delay_cmd_submit": true, 00:18:00.850 "transport_retry_count": 4, 00:18:00.850 "bdev_retry_count": 3, 00:18:00.850 "transport_ack_timeout": 0, 00:18:00.850 "ctrlr_loss_timeout_sec": 0, 00:18:00.850 "reconnect_delay_sec": 0, 00:18:00.850 "fast_io_fail_timeout_sec": 0, 00:18:00.850 "disable_auto_failback": false, 00:18:00.850 "generate_uuids": false, 00:18:00.850 "transport_tos": 0, 00:18:00.850 "nvme_error_stat": false, 00:18:00.850 "rdma_srq_size": 0, 00:18:00.850 "io_path_stat": false, 00:18:00.850 "allow_accel_sequence": false, 00:18:00.850 "rdma_max_cq_size": 0, 00:18:00.850 "rdma_cm_event_timeout_ms": 0, 00:18:00.850 "dhchap_digests": [ 00:18:00.850 "sha256", 00:18:00.850 "sha384", 00:18:00.850 "sha512" 00:18:00.850 ], 00:18:00.850 "dhchap_dhgroups": [ 00:18:00.850 "null", 00:18:00.850 "ffdhe2048", 00:18:00.850 "ffdhe3072", 00:18:00.850 "ffdhe4096", 00:18:00.850 "ffdhe6144", 00:18:00.850 "ffdhe8192" 00:18:00.850 ] 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_nvme_set_hotplug", 00:18:00.850 "params": { 00:18:00.850 "period_us": 100000, 00:18:00.850 "enable": false 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_malloc_create", 00:18:00.850 "params": { 00:18:00.850 "name": "malloc0", 00:18:00.850 "num_blocks": 8192, 00:18:00.850 "block_size": 4096, 00:18:00.850 "physical_block_size": 4096, 00:18:00.850 "uuid": "69478be1-91b7-43ec-b06b-853eae8c8806", 00:18:00.850 "optimal_io_boundary": 0 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "bdev_wait_for_examine" 00:18:00.850 } 00:18:00.850 ] 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "subsystem": "nbd", 00:18:00.850 "config": [] 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "subsystem": "scheduler", 00:18:00.850 "config": [ 00:18:00.850 { 00:18:00.850 "method": "framework_set_scheduler", 00:18:00.850 "params": { 00:18:00.850 "name": "static" 00:18:00.850 } 00:18:00.850 } 
00:18:00.850 ] 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "subsystem": "nvmf", 00:18:00.850 "config": [ 00:18:00.850 { 00:18:00.850 "method": "nvmf_set_config", 00:18:00.850 "params": { 00:18:00.850 "discovery_filter": "match_any", 00:18:00.850 "admin_cmd_passthru": { 00:18:00.850 "identify_ctrlr": false 00:18:00.850 } 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_set_max_subsystems", 00:18:00.850 "params": { 00:18:00.850 "max_subsystems": 1024 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_set_crdt", 00:18:00.850 "params": { 00:18:00.850 "crdt1": 0, 00:18:00.850 "crdt2": 0, 00:18:00.850 "crdt3": 0 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_create_transport", 00:18:00.850 "params": { 00:18:00.850 "trtype": "TCP", 00:18:00.850 "max_queue_depth": 128, 00:18:00.850 "max_io_qpairs_per_ctrlr": 127, 00:18:00.850 "in_capsule_data_size": 4096, 00:18:00.850 "max_io_size": 131072, 00:18:00.850 "io_unit_size": 131072, 00:18:00.850 "max_aq_depth": 128, 00:18:00.850 "num_shared_buffers": 511, 00:18:00.850 "buf_cache_size": 4294967295, 00:18:00.850 "dif_insert_or_strip": false, 00:18:00.850 "zcopy": false, 00:18:00.850 "c2h_success": false, 00:18:00.850 "sock_priority": 0, 00:18:00.850 "abort_timeout_sec": 1, 00:18:00.850 "ack_timeout": 0, 00:18:00.850 "data_wr_pool_size": 0 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_create_subsystem", 00:18:00.850 "params": { 00:18:00.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.850 "allow_any_host": false, 00:18:00.850 "serial_number": "SPDK00000000000001", 00:18:00.850 "model_number": "SPDK bdev Controller", 00:18:00.850 "max_namespaces": 10, 00:18:00.850 "min_cntlid": 1, 00:18:00.850 "max_cntlid": 65519, 00:18:00.850 "ana_reporting": false 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_subsystem_add_host", 00:18:00.850 "params": { 00:18:00.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.850 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.850 "psk": "/tmp/tmp.IMVN0fzbqL" 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_subsystem_add_ns", 00:18:00.850 "params": { 00:18:00.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.850 "namespace": { 00:18:00.850 "nsid": 1, 00:18:00.850 "bdev_name": "malloc0", 00:18:00.850 "nguid": "69478BE191B743ECB06B853EAE8C8806", 00:18:00.850 "uuid": "69478be1-91b7-43ec-b06b-853eae8c8806", 00:18:00.850 "no_auto_visible": false 00:18:00.850 } 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "method": "nvmf_subsystem_add_listener", 00:18:00.850 "params": { 00:18:00.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.850 "listen_address": { 00:18:00.850 "trtype": "TCP", 00:18:00.850 "adrfam": "IPv4", 00:18:00.850 "traddr": "10.0.0.2", 00:18:00.850 "trsvcid": "4420" 00:18:00.850 }, 00:18:00.850 "secure_channel": true 00:18:00.850 } 00:18:00.850 } 00:18:00.850 ] 00:18:00.850 } 00:18:00.850 ] 00:18:00.850 }' 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3855645 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3855645 00:18:00.850 
13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3855645 ']' 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.850 13:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.850 [2024-07-15 13:10:22.298000] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:00.850 [2024-07-15 13:10:22.298097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.851 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.851 [2024-07-15 13:10:22.360324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.851 [2024-07-15 13:10:22.467435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.851 [2024-07-15 13:10:22.467488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.851 [2024-07-15 13:10:22.467515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.851 [2024-07-15 13:10:22.467528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.851 [2024-07-15 13:10:22.467540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
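The `-c /dev/fd/62` argument in the nvmf_tgt command line above is what bash process substitution looks like from inside the launched process: the test script holds the full target configuration as a JSON string and hands it to the app as an anonymous config file instead of writing it to disk. A minimal sketch of the pattern, where `$tgtconf` is an assumed variable name standing in for the JSON echoed above:

    # Restart the target from a saved JSON config without touching disk;
    # <(echo ...) appears in the target's argv as /dev/fd/62 (the exact
    # descriptor number can vary).
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
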
00:18:00.851 [2024-07-15 13:10:22.467627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.109 [2024-07-15 13:10:22.707561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.109 [2024-07-15 13:10:22.723503] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:01.109 [2024-07-15 13:10:22.739570] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.109 [2024-07-15 13:10:22.747115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3855793 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3855793 /var/tmp/bdevperf.sock 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3855793 ']' 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.683 13:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:01.683 "subsystems": [ 00:18:01.683 { 00:18:01.683 "subsystem": "keyring", 00:18:01.683 "config": [] 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "subsystem": "iobuf", 00:18:01.683 "config": [ 00:18:01.683 { 00:18:01.683 "method": "iobuf_set_options", 00:18:01.683 "params": { 00:18:01.683 "small_pool_count": 8192, 00:18:01.683 "large_pool_count": 1024, 00:18:01.683 "small_bufsize": 8192, 00:18:01.683 "large_bufsize": 135168 00:18:01.683 } 00:18:01.683 } 00:18:01.683 ] 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "subsystem": "sock", 00:18:01.683 "config": [ 00:18:01.683 { 00:18:01.683 "method": "sock_set_default_impl", 00:18:01.683 "params": { 00:18:01.683 "impl_name": "posix" 00:18:01.683 } 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "method": "sock_impl_set_options", 00:18:01.683 "params": { 00:18:01.683 "impl_name": "ssl", 00:18:01.683 "recv_buf_size": 4096, 00:18:01.683 "send_buf_size": 4096, 00:18:01.683 "enable_recv_pipe": true, 00:18:01.683 "enable_quickack": false, 00:18:01.683 "enable_placement_id": 0, 00:18:01.683 "enable_zerocopy_send_server": true, 00:18:01.683 "enable_zerocopy_send_client": false, 00:18:01.683 "zerocopy_threshold": 0, 00:18:01.683 "tls_version": 0, 00:18:01.683 "enable_ktls": false 00:18:01.683 } 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "method": "sock_impl_set_options", 00:18:01.683 "params": { 00:18:01.683 "impl_name": "posix", 00:18:01.683 "recv_buf_size": 2097152, 00:18:01.683 "send_buf_size": 2097152, 00:18:01.683 "enable_recv_pipe": true, 00:18:01.683 
"enable_quickack": false, 00:18:01.683 "enable_placement_id": 0, 00:18:01.683 "enable_zerocopy_send_server": true, 00:18:01.683 "enable_zerocopy_send_client": false, 00:18:01.683 "zerocopy_threshold": 0, 00:18:01.683 "tls_version": 0, 00:18:01.683 "enable_ktls": false 00:18:01.683 } 00:18:01.683 } 00:18:01.683 ] 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "subsystem": "vmd", 00:18:01.683 "config": [] 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "subsystem": "accel", 00:18:01.683 "config": [ 00:18:01.683 { 00:18:01.683 "method": "accel_set_options", 00:18:01.683 "params": { 00:18:01.683 "small_cache_size": 128, 00:18:01.683 "large_cache_size": 16, 00:18:01.683 "task_count": 2048, 00:18:01.683 "sequence_count": 2048, 00:18:01.683 "buf_count": 2048 00:18:01.683 } 00:18:01.683 } 00:18:01.683 ] 00:18:01.683 }, 00:18:01.683 { 00:18:01.683 "subsystem": "bdev", 00:18:01.683 "config": [ 00:18:01.683 { 00:18:01.683 "method": "bdev_set_options", 00:18:01.683 "params": { 00:18:01.683 "bdev_io_pool_size": 65535, 00:18:01.683 "bdev_io_cache_size": 256, 00:18:01.683 "bdev_auto_examine": true, 00:18:01.683 "iobuf_small_cache_size": 128, 00:18:01.683 "iobuf_large_cache_size": 16 00:18:01.683 } 00:18:01.683 }, 00:18:01.683 { 00:18:01.684 "method": "bdev_raid_set_options", 00:18:01.684 "params": { 00:18:01.684 "process_window_size_kb": 1024 00:18:01.684 } 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "method": "bdev_iscsi_set_options", 00:18:01.684 "params": { 00:18:01.684 "timeout_sec": 30 00:18:01.684 } 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "method": "bdev_nvme_set_options", 00:18:01.684 "params": { 00:18:01.684 "action_on_timeout": "none", 00:18:01.684 "timeout_us": 0, 00:18:01.684 "timeout_admin_us": 0, 00:18:01.684 "keep_alive_timeout_ms": 10000, 00:18:01.684 "arbitration_burst": 0, 00:18:01.684 "low_priority_weight": 0, 00:18:01.684 "medium_priority_weight": 0, 00:18:01.684 "high_priority_weight": 0, 00:18:01.684 "nvme_adminq_poll_period_us": 10000, 00:18:01.684 "nvme_ioq_poll_period_us": 0, 00:18:01.684 "io_queue_requests": 512, 00:18:01.684 "delay_cmd_submit": true, 00:18:01.684 "transport_retry_count": 4, 00:18:01.684 "bdev_retry_count": 3, 00:18:01.684 "transport_ack_timeout": 0, 00:18:01.684 "ctrlr_loss_timeout_sec": 0, 00:18:01.684 "reconnect_delay_sec": 0, 00:18:01.684 "fast_io_fail_timeout_sec": 0, 00:18:01.684 "disable_auto_failback": false, 00:18:01.684 "generate_uuids": false, 00:18:01.684 "transport_tos": 0, 00:18:01.684 "nvme_error_stat": false, 00:18:01.684 "rdma_srq_size": 0, 00:18:01.684 "io_path_stat": false, 00:18:01.684 "allow_accel_sequence": false, 00:18:01.684 "rdma_max_cq_size": 0, 00:18:01.684 "rdma_cm_event_timeout_ms": 0, 00:18:01.684 "dhchap_digests": [ 00:18:01.684 "sha256", 00:18:01.684 "sha384", 00:18:01.684 "sha512" 00:18:01.684 ], 00:18:01.684 "dhchap_dhgroups": [ 00:18:01.684 "null", 00:18:01.684 "ffdhe2048", 00:18:01.684 "ffdhe3072", 00:18:01.684 "ffdhe4096", 00:18:01.684 "ffdhe6144", 00:18:01.684 "ffdhe8192" 00:18:01.684 ] 00:18:01.684 } 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "method": "bdev_nvme_attach_controller", 00:18:01.684 "params": { 00:18:01.684 "name": "TLSTEST", 00:18:01.684 "trtype": "TCP", 00:18:01.684 "adrfam": "IPv4", 00:18:01.684 "traddr": "10.0.0.2", 00:18:01.684 "trsvcid": "4420", 00:18:01.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.684 "prchk_reftag": false, 00:18:01.684 "prchk_guard": false, 00:18:01.684 "ctrlr_loss_timeout_sec": 0, 00:18:01.684 "reconnect_delay_sec": 0, 00:18:01.684 "fast_io_fail_timeout_sec": 0, 00:18:01.684 
"psk": "/tmp/tmp.IMVN0fzbqL", 00:18:01.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.684 "hdgst": false, 00:18:01.684 "ddgst": false 00:18:01.684 } 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "method": "bdev_nvme_set_hotplug", 00:18:01.684 "params": { 00:18:01.684 "period_us": 100000, 00:18:01.684 "enable": false 00:18:01.684 } 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "method": "bdev_wait_for_examine" 00:18:01.684 } 00:18:01.684 ] 00:18:01.684 }, 00:18:01.684 { 00:18:01.684 "subsystem": "nbd", 00:18:01.684 "config": [] 00:18:01.684 } 00:18:01.684 ] 00:18:01.684 }' 00:18:01.684 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.684 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.684 13:10:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.684 [2024-07-15 13:10:23.322538] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:01.685 [2024-07-15 13:10:23.322626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855793 ] 00:18:01.685 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.685 [2024-07-15 13:10:23.380355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.949 [2024-07-15 13:10:23.488343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.207 [2024-07-15 13:10:23.663443] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.207 [2024-07-15 13:10:23.663551] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.773 13:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.773 13:10:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.773 13:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:02.773 Running I/O for 10 seconds... 
00:18:14.964 00:18:14.964 Latency(us) 00:18:14.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.964 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:14.964 Verification LBA range: start 0x0 length 0x2000 00:18:14.964 TLSTESTn1 : 10.04 2740.80 10.71 0.00 0.00 46583.40 6213.78 75342.13 00:18:14.964 =================================================================================================================== 00:18:14.964 Total : 2740.80 10.71 0.00 0.00 46583.40 6213.78 75342.13 00:18:14.964 { 00:18:14.964 "core_count": 1, 00:18:14.964 "test_results": [ 00:18:14.964 { 00:18:14.964 "job": "TLSTESTn1", 00:18:14.964 "test_status": "finished", 00:18:14.964 "core_mask": "0x4", 00:18:14.964 "workload": "verify", 00:18:14.964 "verify_LBA_range_start": 0, 00:18:14.964 "verify_LBA_range_len": 8192, 00:18:14.964 "queue_depth": 128, 00:18:14.964 "io_size": 4096, 00:18:14.964 "runtime": 10.04416561126709, 00:18:14.964 "io_per_second": 2740.7950047818804, 00:18:14.964 "MiB_per_second": 10.70623048742922, 00:18:14.964 "fails_per_second": 0.0, 00:18:14.964 "timeout_per_second": 0.0, 00:18:14.964 "average_latency_us": 46583.40087595169, 00:18:14.964 "min_latency_us": 6213.783703703703, 00:18:14.964 "max_latency_us": 75342.1274074074 00:18:14.964 } 00:18:14.964 ] 00:18:14.964 } 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3855793 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3855793 ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3855793 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855793 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855793' 00:18:14.964 killing process with pid 3855793 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3855793 00:18:14.964 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.964 00:18:14.964 Latency(us) 00:18:14.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.964 =================================================================================================================== 00:18:14.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.964 [2024-07-15 13:10:34.517033] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3855793 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3855645 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3855645 ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3855645 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.964 13:10:34 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855645 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855645' 00:18:14.964 killing process with pid 3855645 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3855645 00:18:14.964 [2024-07-15 13:10:34.802815] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:14.964 13:10:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3855645 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3857121 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3857121 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3857121 ']' 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.964 13:10:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.964 [2024-07-15 13:10:35.162469] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:14.964 [2024-07-15 13:10:35.162574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.964 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.964 [2024-07-15 13:10:35.232020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.964 [2024-07-15 13:10:35.344337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.964 [2024-07-15 13:10:35.344414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.964 [2024-07-15 13:10:35.344430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.964 [2024-07-15 13:10:35.344443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:14.964 [2024-07-15 13:10:35.344454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.964 [2024-07-15 13:10:35.344493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IMVN0fzbqL 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMVN0fzbqL 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.964 [2024-07-15 13:10:36.364989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.964 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.249 [2024-07-15 13:10:36.858276] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.249 [2024-07-15 13:10:36.858518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.249 13:10:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.507 malloc0 00:18:15.507 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.765 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL 00:18:16.023 [2024-07-15 13:10:37.603738] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3857420 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3857420 /var/tmp/bdevperf.sock 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3857420 ']' 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 
-- # local max_retries=100 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.023 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.023 [2024-07-15 13:10:37.666169] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:16.023 [2024-07-15 13:10:37.666267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857420 ] 00:18:16.023 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.281 [2024-07-15 13:10:37.726208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.281 [2024-07-15 13:10:37.834336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.281 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.281 13:10:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.281 13:10:37 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMVN0fzbqL 00:18:16.539 13:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.797 [2024-07-15 13:10:38.431305] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.055 nvme0n1 00:18:17.055 13:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.055 Running I/O for 1 seconds... 
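This iteration exercises the PSK flow end to end, and every command in the condensed sketch below is taken from the trace above. On the target side the PSK is still supplied as a file path to nvmf_subsystem_add_host, which is what fires the "PSK path ... deprecated" warning; on the initiator side the same file is first registered under a key name via the keyring and then referenced by that name when attaching. rpc.py paths are shortened to the SPDK checkout:

    # Target: TLS-enabled listener (-k) and a host admitted with a PSK file.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL

    # Initiator (bdevperf): register the PSK under a key name, then attach by name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMVN0fzbqL
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
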
00:18:17.989 00:18:17.989 Latency(us) 00:18:17.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.989 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:17.989 Verification LBA range: start 0x0 length 0x2000 00:18:17.989 nvme0n1 : 1.04 2398.56 9.37 0.00 0.00 52337.66 10582.85 77672.30 00:18:17.989 =================================================================================================================== 00:18:17.989 Total : 2398.56 9.37 0.00 0.00 52337.66 10582.85 77672.30 00:18:17.989 { 00:18:17.989 "core_count": 1, 00:18:17.989 "test_results": [ 00:18:17.989 { 00:18:17.989 "job": "nvme0n1", 00:18:17.989 "test_status": "finished", 00:18:17.989 "core_mask": "0x2", 00:18:17.989 "workload": "verify", 00:18:17.989 "verify_LBA_range_start": 0, 00:18:17.989 "verify_LBA_range_len": 8192, 00:18:17.989 "queue_depth": 128, 00:18:17.989 "io_size": 4096, 00:18:17.989 "runtime": 1.044374942779541, 00:18:17.989 "io_per_second": 2398.563734290844, 00:18:17.989 "MiB_per_second": 9.36938958707361, 00:18:17.989 "fails_per_second": 0.0, 00:18:17.989 "timeout_per_second": 0.0, 00:18:17.989 "average_latency_us": 52337.65907947069, 00:18:17.989 "min_latency_us": 10582.85037037037, 00:18:17.989 "max_latency_us": 77672.29629629629 00:18:17.989 } 00:18:17.989 ] 00:18:17.989 } 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3857420 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3857420 ']' 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3857420 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3857420 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3857420' 00:18:18.247 killing process with pid 3857420 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3857420 00:18:18.247 Received shutdown signal, test time was about 1.000000 seconds 00:18:18.247 00:18:18.247 Latency(us) 00:18:18.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.247 =================================================================================================================== 00:18:18.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.247 13:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3857420 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3857121 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3857121 ']' 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3857121 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3857121 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.504 13:10:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3857121' 00:18:18.504 killing process with pid 3857121 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3857121 00:18:18.504 [2024-07-15 13:10:40.036653] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.504 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3857121 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3857817 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3857817 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3857817 ']' 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.762 13:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.762 [2024-07-15 13:10:40.388450] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:18.762 [2024-07-15 13:10:40.388540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.762 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.762 [2024-07-15 13:10:40.461762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.020 [2024-07-15 13:10:40.576407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.020 [2024-07-15 13:10:40.576470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.020 [2024-07-15 13:10:40.576497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.020 [2024-07-15 13:10:40.576511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.020 [2024-07-15 13:10:40.576530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
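The killprocess summaries above each report deprecation hits ("PSK path" on the target, "spdk_nvme_ctrlr_opts.psk" on the initiator), which is the point of the keyring variant exercised next: a PSK registered through keyring_file_add_key survives a save_config/load_config round trip as a key reference rather than a raw file path. That is visible further down, where the saved target config carries a keyring section and "psk": "key0" inside nvmf_subsystem_add_host. A sketch contrasting the two target-side forms, assuming the same rpc.py conventions used throughout this log (the key-name form is inferred from the saved config below, not from an explicit command line in the trace):

    # Deprecated form, warned about in the trace above: PSK given as a file path.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMVN0fzbqL
    # Keyring form: a key previously added with keyring_file_add_key, which is
    # what save_config later emits as "psk": "key0".
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
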
00:18:19.020 [2024-07-15 13:10:40.576559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.953 [2024-07-15 13:10:41.358056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.953 malloc0 00:18:19.953 [2024-07-15 13:10:41.389727] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.953 [2024-07-15 13:10:41.390017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3857969 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3857969 /var/tmp/bdevperf.sock 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3857969 ']' 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.953 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.953 [2024-07-15 13:10:41.457387] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:18:19.953 [2024-07-15 13:10:41.457450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857969 ] 00:18:19.953 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.953 [2024-07-15 13:10:41.519505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.953 [2024-07-15 13:10:41.635383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.211 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.211 13:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.211 13:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMVN0fzbqL 00:18:20.469 13:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:20.728 [2024-07-15 13:10:42.307854] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.728 nvme0n1 00:18:20.728 13:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.988 Running I/O for 1 seconds... 00:18:21.921 00:18:21.921 Latency(us) 00:18:21.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.921 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:21.921 Verification LBA range: start 0x0 length 0x2000 00:18:21.921 nvme0n1 : 1.03 1722.04 6.73 0.00 0.00 73320.11 11311.03 82721.00 00:18:21.921 =================================================================================================================== 00:18:21.921 Total : 1722.04 6.73 0.00 0.00 73320.11 11311.03 82721.00 00:18:21.921 { 00:18:21.921 "core_count": 1, 00:18:21.921 "test_results": [ 00:18:21.921 { 00:18:21.921 "job": "nvme0n1", 00:18:21.921 "test_status": "finished", 00:18:21.922 "core_mask": "0x2", 00:18:21.922 "workload": "verify", 00:18:21.922 "verify_LBA_range_start": 0, 00:18:21.922 "verify_LBA_range_len": 8192, 00:18:21.922 "queue_depth": 128, 00:18:21.922 "io_size": 4096, 00:18:21.922 "runtime": 1.0313340425491333, 00:18:21.922 "io_per_second": 1722.0415500701033, 00:18:21.922 "MiB_per_second": 6.726724804961341, 00:18:21.922 "fails_per_second": 0.0, 00:18:21.922 "timeout_per_second": 0.0, 00:18:21.922 "average_latency_us": 73320.1083750417, 00:18:21.922 "min_latency_us": 11311.028148148149, 00:18:21.922 "max_latency_us": 82720.99555555555 00:18:21.922 } 00:18:21.922 ] 00:18:21.922 } 00:18:21.922 13:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:21.922 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.922 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.179 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.179 13:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:22.179 "subsystems": [ 00:18:22.179 { 00:18:22.179 "subsystem": "keyring", 00:18:22.179 "config": [ 00:18:22.179 { 
00:18:22.179 "method": "keyring_file_add_key", 00:18:22.179 "params": { 00:18:22.179 "name": "key0", 00:18:22.179 "path": "/tmp/tmp.IMVN0fzbqL" 00:18:22.179 } 00:18:22.179 } 00:18:22.179 ] 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "subsystem": "iobuf", 00:18:22.179 "config": [ 00:18:22.179 { 00:18:22.179 "method": "iobuf_set_options", 00:18:22.179 "params": { 00:18:22.179 "small_pool_count": 8192, 00:18:22.179 "large_pool_count": 1024, 00:18:22.179 "small_bufsize": 8192, 00:18:22.179 "large_bufsize": 135168 00:18:22.179 } 00:18:22.179 } 00:18:22.179 ] 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "subsystem": "sock", 00:18:22.179 "config": [ 00:18:22.179 { 00:18:22.179 "method": "sock_set_default_impl", 00:18:22.179 "params": { 00:18:22.179 "impl_name": "posix" 00:18:22.179 } 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "method": "sock_impl_set_options", 00:18:22.179 "params": { 00:18:22.179 "impl_name": "ssl", 00:18:22.179 "recv_buf_size": 4096, 00:18:22.179 "send_buf_size": 4096, 00:18:22.179 "enable_recv_pipe": true, 00:18:22.179 "enable_quickack": false, 00:18:22.179 "enable_placement_id": 0, 00:18:22.179 "enable_zerocopy_send_server": true, 00:18:22.179 "enable_zerocopy_send_client": false, 00:18:22.179 "zerocopy_threshold": 0, 00:18:22.179 "tls_version": 0, 00:18:22.179 "enable_ktls": false 00:18:22.179 } 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "method": "sock_impl_set_options", 00:18:22.179 "params": { 00:18:22.179 "impl_name": "posix", 00:18:22.179 "recv_buf_size": 2097152, 00:18:22.179 "send_buf_size": 2097152, 00:18:22.179 "enable_recv_pipe": true, 00:18:22.179 "enable_quickack": false, 00:18:22.179 "enable_placement_id": 0, 00:18:22.179 "enable_zerocopy_send_server": true, 00:18:22.179 "enable_zerocopy_send_client": false, 00:18:22.179 "zerocopy_threshold": 0, 00:18:22.179 "tls_version": 0, 00:18:22.179 "enable_ktls": false 00:18:22.179 } 00:18:22.179 } 00:18:22.179 ] 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "subsystem": "vmd", 00:18:22.179 "config": [] 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "subsystem": "accel", 00:18:22.179 "config": [ 00:18:22.179 { 00:18:22.179 "method": "accel_set_options", 00:18:22.179 "params": { 00:18:22.179 "small_cache_size": 128, 00:18:22.179 "large_cache_size": 16, 00:18:22.179 "task_count": 2048, 00:18:22.179 "sequence_count": 2048, 00:18:22.179 "buf_count": 2048 00:18:22.179 } 00:18:22.179 } 00:18:22.179 ] 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "subsystem": "bdev", 00:18:22.179 "config": [ 00:18:22.179 { 00:18:22.179 "method": "bdev_set_options", 00:18:22.179 "params": { 00:18:22.179 "bdev_io_pool_size": 65535, 00:18:22.179 "bdev_io_cache_size": 256, 00:18:22.179 "bdev_auto_examine": true, 00:18:22.179 "iobuf_small_cache_size": 128, 00:18:22.179 "iobuf_large_cache_size": 16 00:18:22.179 } 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "method": "bdev_raid_set_options", 00:18:22.179 "params": { 00:18:22.179 "process_window_size_kb": 1024 00:18:22.179 } 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "method": "bdev_iscsi_set_options", 00:18:22.179 "params": { 00:18:22.179 "timeout_sec": 30 00:18:22.179 } 00:18:22.179 }, 00:18:22.179 { 00:18:22.179 "method": "bdev_nvme_set_options", 00:18:22.179 "params": { 00:18:22.179 "action_on_timeout": "none", 00:18:22.179 "timeout_us": 0, 00:18:22.179 "timeout_admin_us": 0, 00:18:22.179 "keep_alive_timeout_ms": 10000, 00:18:22.179 "arbitration_burst": 0, 00:18:22.179 "low_priority_weight": 0, 00:18:22.179 "medium_priority_weight": 0, 00:18:22.179 "high_priority_weight": 0, 00:18:22.179 
"nvme_adminq_poll_period_us": 10000, 00:18:22.179 "nvme_ioq_poll_period_us": 0, 00:18:22.179 "io_queue_requests": 0, 00:18:22.179 "delay_cmd_submit": true, 00:18:22.179 "transport_retry_count": 4, 00:18:22.179 "bdev_retry_count": 3, 00:18:22.179 "transport_ack_timeout": 0, 00:18:22.179 "ctrlr_loss_timeout_sec": 0, 00:18:22.179 "reconnect_delay_sec": 0, 00:18:22.179 "fast_io_fail_timeout_sec": 0, 00:18:22.179 "disable_auto_failback": false, 00:18:22.179 "generate_uuids": false, 00:18:22.179 "transport_tos": 0, 00:18:22.179 "nvme_error_stat": false, 00:18:22.179 "rdma_srq_size": 0, 00:18:22.179 "io_path_stat": false, 00:18:22.179 "allow_accel_sequence": false, 00:18:22.179 "rdma_max_cq_size": 0, 00:18:22.179 "rdma_cm_event_timeout_ms": 0, 00:18:22.180 "dhchap_digests": [ 00:18:22.180 "sha256", 00:18:22.180 "sha384", 00:18:22.180 "sha512" 00:18:22.180 ], 00:18:22.180 "dhchap_dhgroups": [ 00:18:22.180 "null", 00:18:22.180 "ffdhe2048", 00:18:22.180 "ffdhe3072", 00:18:22.180 "ffdhe4096", 00:18:22.180 "ffdhe6144", 00:18:22.180 "ffdhe8192" 00:18:22.180 ] 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "bdev_nvme_set_hotplug", 00:18:22.180 "params": { 00:18:22.180 "period_us": 100000, 00:18:22.180 "enable": false 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "bdev_malloc_create", 00:18:22.180 "params": { 00:18:22.180 "name": "malloc0", 00:18:22.180 "num_blocks": 8192, 00:18:22.180 "block_size": 4096, 00:18:22.180 "physical_block_size": 4096, 00:18:22.180 "uuid": "39521bd5-8850-44fc-a0bb-9d7d640ddd7a", 00:18:22.180 "optimal_io_boundary": 0 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "bdev_wait_for_examine" 00:18:22.180 } 00:18:22.180 ] 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "subsystem": "nbd", 00:18:22.180 "config": [] 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "subsystem": "scheduler", 00:18:22.180 "config": [ 00:18:22.180 { 00:18:22.180 "method": "framework_set_scheduler", 00:18:22.180 "params": { 00:18:22.180 "name": "static" 00:18:22.180 } 00:18:22.180 } 00:18:22.180 ] 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "subsystem": "nvmf", 00:18:22.180 "config": [ 00:18:22.180 { 00:18:22.180 "method": "nvmf_set_config", 00:18:22.180 "params": { 00:18:22.180 "discovery_filter": "match_any", 00:18:22.180 "admin_cmd_passthru": { 00:18:22.180 "identify_ctrlr": false 00:18:22.180 } 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_set_max_subsystems", 00:18:22.180 "params": { 00:18:22.180 "max_subsystems": 1024 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_set_crdt", 00:18:22.180 "params": { 00:18:22.180 "crdt1": 0, 00:18:22.180 "crdt2": 0, 00:18:22.180 "crdt3": 0 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_create_transport", 00:18:22.180 "params": { 00:18:22.180 "trtype": "TCP", 00:18:22.180 "max_queue_depth": 128, 00:18:22.180 "max_io_qpairs_per_ctrlr": 127, 00:18:22.180 "in_capsule_data_size": 4096, 00:18:22.180 "max_io_size": 131072, 00:18:22.180 "io_unit_size": 131072, 00:18:22.180 "max_aq_depth": 128, 00:18:22.180 "num_shared_buffers": 511, 00:18:22.180 "buf_cache_size": 4294967295, 00:18:22.180 "dif_insert_or_strip": false, 00:18:22.180 "zcopy": false, 00:18:22.180 "c2h_success": false, 00:18:22.180 "sock_priority": 0, 00:18:22.180 "abort_timeout_sec": 1, 00:18:22.180 "ack_timeout": 0, 00:18:22.180 "data_wr_pool_size": 0 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_create_subsystem", 00:18:22.180 "params": { 
00:18:22.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.180 "allow_any_host": false, 00:18:22.180 "serial_number": "00000000000000000000", 00:18:22.180 "model_number": "SPDK bdev Controller", 00:18:22.180 "max_namespaces": 32, 00:18:22.180 "min_cntlid": 1, 00:18:22.180 "max_cntlid": 65519, 00:18:22.180 "ana_reporting": false 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_subsystem_add_host", 00:18:22.180 "params": { 00:18:22.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.180 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.180 "psk": "key0" 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_subsystem_add_ns", 00:18:22.180 "params": { 00:18:22.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.180 "namespace": { 00:18:22.180 "nsid": 1, 00:18:22.180 "bdev_name": "malloc0", 00:18:22.180 "nguid": "39521BD5885044FCA0BB9D7D640DDD7A", 00:18:22.180 "uuid": "39521bd5-8850-44fc-a0bb-9d7d640ddd7a", 00:18:22.180 "no_auto_visible": false 00:18:22.180 } 00:18:22.180 } 00:18:22.180 }, 00:18:22.180 { 00:18:22.180 "method": "nvmf_subsystem_add_listener", 00:18:22.180 "params": { 00:18:22.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.180 "listen_address": { 00:18:22.180 "trtype": "TCP", 00:18:22.180 "adrfam": "IPv4", 00:18:22.180 "traddr": "10.0.0.2", 00:18:22.180 "trsvcid": "4420" 00:18:22.180 }, 00:18:22.180 "secure_channel": true 00:18:22.180 } 00:18:22.180 } 00:18:22.180 ] 00:18:22.180 } 00:18:22.180 ] 00:18:22.180 }' 00:18:22.180 13:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:22.438 13:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:22.438 "subsystems": [ 00:18:22.438 { 00:18:22.438 "subsystem": "keyring", 00:18:22.438 "config": [ 00:18:22.438 { 00:18:22.438 "method": "keyring_file_add_key", 00:18:22.438 "params": { 00:18:22.438 "name": "key0", 00:18:22.438 "path": "/tmp/tmp.IMVN0fzbqL" 00:18:22.438 } 00:18:22.438 } 00:18:22.438 ] 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "subsystem": "iobuf", 00:18:22.438 "config": [ 00:18:22.438 { 00:18:22.438 "method": "iobuf_set_options", 00:18:22.438 "params": { 00:18:22.438 "small_pool_count": 8192, 00:18:22.438 "large_pool_count": 1024, 00:18:22.438 "small_bufsize": 8192, 00:18:22.438 "large_bufsize": 135168 00:18:22.438 } 00:18:22.438 } 00:18:22.438 ] 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "subsystem": "sock", 00:18:22.438 "config": [ 00:18:22.438 { 00:18:22.438 "method": "sock_set_default_impl", 00:18:22.438 "params": { 00:18:22.438 "impl_name": "posix" 00:18:22.438 } 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "method": "sock_impl_set_options", 00:18:22.438 "params": { 00:18:22.438 "impl_name": "ssl", 00:18:22.438 "recv_buf_size": 4096, 00:18:22.438 "send_buf_size": 4096, 00:18:22.438 "enable_recv_pipe": true, 00:18:22.438 "enable_quickack": false, 00:18:22.438 "enable_placement_id": 0, 00:18:22.438 "enable_zerocopy_send_server": true, 00:18:22.438 "enable_zerocopy_send_client": false, 00:18:22.438 "zerocopy_threshold": 0, 00:18:22.438 "tls_version": 0, 00:18:22.438 "enable_ktls": false 00:18:22.438 } 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "method": "sock_impl_set_options", 00:18:22.438 "params": { 00:18:22.438 "impl_name": "posix", 00:18:22.438 "recv_buf_size": 2097152, 00:18:22.438 "send_buf_size": 2097152, 00:18:22.438 "enable_recv_pipe": true, 00:18:22.438 "enable_quickack": false, 00:18:22.438 "enable_placement_id": 0, 00:18:22.438 
"enable_zerocopy_send_server": true, 00:18:22.438 "enable_zerocopy_send_client": false, 00:18:22.438 "zerocopy_threshold": 0, 00:18:22.438 "tls_version": 0, 00:18:22.438 "enable_ktls": false 00:18:22.438 } 00:18:22.438 } 00:18:22.438 ] 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "subsystem": "vmd", 00:18:22.438 "config": [] 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "subsystem": "accel", 00:18:22.438 "config": [ 00:18:22.438 { 00:18:22.438 "method": "accel_set_options", 00:18:22.438 "params": { 00:18:22.438 "small_cache_size": 128, 00:18:22.438 "large_cache_size": 16, 00:18:22.438 "task_count": 2048, 00:18:22.438 "sequence_count": 2048, 00:18:22.438 "buf_count": 2048 00:18:22.438 } 00:18:22.438 } 00:18:22.438 ] 00:18:22.438 }, 00:18:22.438 { 00:18:22.438 "subsystem": "bdev", 00:18:22.438 "config": [ 00:18:22.438 { 00:18:22.438 "method": "bdev_set_options", 00:18:22.438 "params": { 00:18:22.438 "bdev_io_pool_size": 65535, 00:18:22.439 "bdev_io_cache_size": 256, 00:18:22.439 "bdev_auto_examine": true, 00:18:22.439 "iobuf_small_cache_size": 128, 00:18:22.439 "iobuf_large_cache_size": 16 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_raid_set_options", 00:18:22.439 "params": { 00:18:22.439 "process_window_size_kb": 1024 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_iscsi_set_options", 00:18:22.439 "params": { 00:18:22.439 "timeout_sec": 30 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_nvme_set_options", 00:18:22.439 "params": { 00:18:22.439 "action_on_timeout": "none", 00:18:22.439 "timeout_us": 0, 00:18:22.439 "timeout_admin_us": 0, 00:18:22.439 "keep_alive_timeout_ms": 10000, 00:18:22.439 "arbitration_burst": 0, 00:18:22.439 "low_priority_weight": 0, 00:18:22.439 "medium_priority_weight": 0, 00:18:22.439 "high_priority_weight": 0, 00:18:22.439 "nvme_adminq_poll_period_us": 10000, 00:18:22.439 "nvme_ioq_poll_period_us": 0, 00:18:22.439 "io_queue_requests": 512, 00:18:22.439 "delay_cmd_submit": true, 00:18:22.439 "transport_retry_count": 4, 00:18:22.439 "bdev_retry_count": 3, 00:18:22.439 "transport_ack_timeout": 0, 00:18:22.439 "ctrlr_loss_timeout_sec": 0, 00:18:22.439 "reconnect_delay_sec": 0, 00:18:22.439 "fast_io_fail_timeout_sec": 0, 00:18:22.439 "disable_auto_failback": false, 00:18:22.439 "generate_uuids": false, 00:18:22.439 "transport_tos": 0, 00:18:22.439 "nvme_error_stat": false, 00:18:22.439 "rdma_srq_size": 0, 00:18:22.439 "io_path_stat": false, 00:18:22.439 "allow_accel_sequence": false, 00:18:22.439 "rdma_max_cq_size": 0, 00:18:22.439 "rdma_cm_event_timeout_ms": 0, 00:18:22.439 "dhchap_digests": [ 00:18:22.439 "sha256", 00:18:22.439 "sha384", 00:18:22.439 "sha512" 00:18:22.439 ], 00:18:22.439 "dhchap_dhgroups": [ 00:18:22.439 "null", 00:18:22.439 "ffdhe2048", 00:18:22.439 "ffdhe3072", 00:18:22.439 "ffdhe4096", 00:18:22.439 "ffdhe6144", 00:18:22.439 "ffdhe8192" 00:18:22.439 ] 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_nvme_attach_controller", 00:18:22.439 "params": { 00:18:22.439 "name": "nvme0", 00:18:22.439 "trtype": "TCP", 00:18:22.439 "adrfam": "IPv4", 00:18:22.439 "traddr": "10.0.0.2", 00:18:22.439 "trsvcid": "4420", 00:18:22.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.439 "prchk_reftag": false, 00:18:22.439 "prchk_guard": false, 00:18:22.439 "ctrlr_loss_timeout_sec": 0, 00:18:22.439 "reconnect_delay_sec": 0, 00:18:22.439 "fast_io_fail_timeout_sec": 0, 00:18:22.439 "psk": "key0", 00:18:22.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.439 
"hdgst": false, 00:18:22.439 "ddgst": false 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_nvme_set_hotplug", 00:18:22.439 "params": { 00:18:22.439 "period_us": 100000, 00:18:22.439 "enable": false 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_enable_histogram", 00:18:22.439 "params": { 00:18:22.439 "name": "nvme0n1", 00:18:22.439 "enable": true 00:18:22.439 } 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "method": "bdev_wait_for_examine" 00:18:22.439 } 00:18:22.439 ] 00:18:22.439 }, 00:18:22.439 { 00:18:22.439 "subsystem": "nbd", 00:18:22.439 "config": [] 00:18:22.439 } 00:18:22.439 ] 00:18:22.439 }' 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3857969 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3857969 ']' 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3857969 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.439 13:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3857969 00:18:22.439 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.439 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.439 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3857969' 00:18:22.439 killing process with pid 3857969 00:18:22.439 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3857969 00:18:22.439 Received shutdown signal, test time was about 1.000000 seconds 00:18:22.439 00:18:22.439 Latency(us) 00:18:22.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.439 =================================================================================================================== 00:18:22.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.439 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3857969 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3857817 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3857817 ']' 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3857817 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3857817 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3857817' 00:18:22.697 killing process with pid 3857817 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3857817 00:18:22.697 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3857817 00:18:22.955 13:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:22.955 13:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.955 13:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 
00:18:22.955 "subsystems": [ 00:18:22.955 { 00:18:22.955 "subsystem": "keyring", 00:18:22.955 "config": [ 00:18:22.955 { 00:18:22.955 "method": "keyring_file_add_key", 00:18:22.955 "params": { 00:18:22.955 "name": "key0", 00:18:22.955 "path": "/tmp/tmp.IMVN0fzbqL" 00:18:22.955 } 00:18:22.955 } 00:18:22.955 ] 00:18:22.955 }, 00:18:22.955 { 00:18:22.955 "subsystem": "iobuf", 00:18:22.955 "config": [ 00:18:22.955 { 00:18:22.955 "method": "iobuf_set_options", 00:18:22.955 "params": { 00:18:22.955 "small_pool_count": 8192, 00:18:22.955 "large_pool_count": 1024, 00:18:22.955 "small_bufsize": 8192, 00:18:22.955 "large_bufsize": 135168 00:18:22.955 } 00:18:22.955 } 00:18:22.955 ] 00:18:22.955 }, 00:18:22.955 { 00:18:22.955 "subsystem": "sock", 00:18:22.956 "config": [ 00:18:22.956 { 00:18:22.956 "method": "sock_set_default_impl", 00:18:22.956 "params": { 00:18:22.956 "impl_name": "posix" 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "sock_impl_set_options", 00:18:22.956 "params": { 00:18:22.956 "impl_name": "ssl", 00:18:22.956 "recv_buf_size": 4096, 00:18:22.956 "send_buf_size": 4096, 00:18:22.956 "enable_recv_pipe": true, 00:18:22.956 "enable_quickack": false, 00:18:22.956 "enable_placement_id": 0, 00:18:22.956 "enable_zerocopy_send_server": true, 00:18:22.956 "enable_zerocopy_send_client": false, 00:18:22.956 "zerocopy_threshold": 0, 00:18:22.956 "tls_version": 0, 00:18:22.956 "enable_ktls": false 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "sock_impl_set_options", 00:18:22.956 "params": { 00:18:22.956 "impl_name": "posix", 00:18:22.956 "recv_buf_size": 2097152, 00:18:22.956 "send_buf_size": 2097152, 00:18:22.956 "enable_recv_pipe": true, 00:18:22.956 "enable_quickack": false, 00:18:22.956 "enable_placement_id": 0, 00:18:22.956 "enable_zerocopy_send_server": true, 00:18:22.956 "enable_zerocopy_send_client": false, 00:18:22.956 "zerocopy_threshold": 0, 00:18:22.956 "tls_version": 0, 00:18:22.956 "enable_ktls": false 00:18:22.956 } 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "vmd", 00:18:22.956 "config": [] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "accel", 00:18:22.956 "config": [ 00:18:22.956 { 00:18:22.956 "method": "accel_set_options", 00:18:22.956 "params": { 00:18:22.956 "small_cache_size": 128, 00:18:22.956 "large_cache_size": 16, 00:18:22.956 "task_count": 2048, 00:18:22.956 "sequence_count": 2048, 00:18:22.956 "buf_count": 2048 00:18:22.956 } 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "bdev", 00:18:22.956 "config": [ 00:18:22.956 { 00:18:22.956 "method": "bdev_set_options", 00:18:22.956 "params": { 00:18:22.956 "bdev_io_pool_size": 65535, 00:18:22.956 "bdev_io_cache_size": 256, 00:18:22.956 "bdev_auto_examine": true, 00:18:22.956 "iobuf_small_cache_size": 128, 00:18:22.956 "iobuf_large_cache_size": 16 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_raid_set_options", 00:18:22.956 "params": { 00:18:22.956 "process_window_size_kb": 1024 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_iscsi_set_options", 00:18:22.956 "params": { 00:18:22.956 "timeout_sec": 30 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_nvme_set_options", 00:18:22.956 "params": { 00:18:22.956 "action_on_timeout": "none", 00:18:22.956 "timeout_us": 0, 00:18:22.956 "timeout_admin_us": 0, 00:18:22.956 "keep_alive_timeout_ms": 10000, 00:18:22.956 "arbitration_burst": 0, 00:18:22.956 
"low_priority_weight": 0, 00:18:22.956 "medium_priority_weight": 0, 00:18:22.956 "high_priority_weight": 0, 00:18:22.956 "nvme_adminq_poll_period_us": 10000, 00:18:22.956 "nvme_ioq_poll_period_us": 0, 00:18:22.956 "io_queue_requests": 0, 00:18:22.956 "delay_cmd_submit": true, 00:18:22.956 "transport_retry_count": 4, 00:18:22.956 "bdev_retry_count": 3, 00:18:22.956 "transport_ack_timeout": 0, 00:18:22.956 "ctrlr_loss_timeout_sec": 0, 00:18:22.956 "reconnect_delay_sec": 0, 00:18:22.956 "fast_io_fail_timeout_sec": 0, 00:18:22.956 "disable_auto_failback": false, 00:18:22.956 "generate_uuids": false, 00:18:22.956 "transport_tos": 0, 00:18:22.956 "nvme_error_stat": false, 00:18:22.956 "rdma_srq_size": 0, 00:18:22.956 "io_path_stat": false, 00:18:22.956 "allow_accel_sequence": false, 00:18:22.956 "rdma_max_cq_size": 0, 00:18:22.956 "rdma_cm_event_timeout_ms": 0, 00:18:22.956 "dhchap_digests": [ 00:18:22.956 "sha256", 00:18:22.956 "sha384", 00:18:22.956 "sha512" 00:18:22.956 ], 00:18:22.956 "dhchap_dhgroups": [ 00:18:22.956 "null", 00:18:22.956 "ffdhe2048", 00:18:22.956 "ffdhe3072", 00:18:22.956 "ffdhe4096", 00:18:22.956 "ffdhe6144", 00:18:22.956 "ffdhe8192" 00:18:22.956 ] 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_nvme_set_hotplug", 00:18:22.956 "params": { 00:18:22.956 "period_us": 100000, 00:18:22.956 "enable": false 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_malloc_create", 00:18:22.956 "params": { 00:18:22.956 "name": "malloc0", 00:18:22.956 "num_blocks": 8192, 00:18:22.956 "block_size": 4096, 00:18:22.956 "physical_block_size": 4096, 00:18:22.956 "uuid": "39521bd5-8850-44fc-a0bb-9d7d640ddd7a", 00:18:22.956 "optimal_io_boundary": 0 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "bdev_wait_for_examine" 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "nbd", 00:18:22.956 "config": [] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "scheduler", 00:18:22.956 "config": [ 00:18:22.956 { 00:18:22.956 "method": "framework_set_scheduler", 00:18:22.956 "params": { 00:18:22.956 "name": "static" 00:18:22.956 } 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "subsystem": "nvmf", 00:18:22.956 "config": [ 00:18:22.956 { 00:18:22.956 "method": "nvmf_set_config", 00:18:22.956 "params": { 00:18:22.956 "discovery_filter": "match_any", 00:18:22.956 "admin_cmd_passthru": { 00:18:22.956 "identify_ctrlr": false 00:18:22.956 } 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_set_max_subsystems", 00:18:22.956 "params": { 00:18:22.956 "max_subsystems": 1024 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_set_crdt", 00:18:22.956 "params": { 00:18:22.956 "crdt1": 0, 00:18:22.956 "crdt2": 0, 00:18:22.956 "crdt3": 0 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_create_transport", 00:18:22.956 "params": { 00:18:22.956 "trtype": "TCP", 00:18:22.956 "max_queue_depth": 128, 00:18:22.956 "max_io_qpairs_per_ctrlr": 127, 00:18:22.956 "in_capsule_data_size": 4096, 00:18:22.956 "max_io_size": 131072, 00:18:22.956 "io_unit_size": 131072, 00:18:22.956 "max_aq_depth": 128, 00:18:22.956 "num_shared_buffers": 511, 00:18:22.956 "buf_cache_size": 4294967295, 00:18:22.956 "dif_insert_or_strip": false, 00:18:22.956 "zcopy": false, 00:18:22.956 "c2h_success": false, 00:18:22.956 "sock_priority": 0, 00:18:22.956 "abort_timeout_sec": 1, 00:18:22.956 "ack_timeout": 0, 00:18:22.956 "data_wr_pool_size": 0 
00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_create_subsystem", 00:18:22.956 "params": { 00:18:22.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.956 "allow_any_host": false, 00:18:22.956 "serial_number": "00000000000000000000", 00:18:22.956 "model_number": "SPDK bdev Controller", 00:18:22.956 "max_namespaces": 32, 00:18:22.956 "min_cntlid": 1, 00:18:22.956 "max_cntlid": 65519, 00:18:22.956 "ana_reporting": false 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_subsystem_add_host", 00:18:22.956 "params": { 00:18:22.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.956 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.956 "psk": "key0" 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_subsystem_add_ns", 00:18:22.956 "params": { 00:18:22.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.956 "namespace": { 00:18:22.956 "nsid": 1, 00:18:22.956 "bdev_name": "malloc0", 00:18:22.956 "nguid": "39521BD5885044FCA0BB9D7D640DDD7A", 00:18:22.956 "uuid": "39521bd5-8850-44fc-a0bb-9d7d640ddd7a", 00:18:22.956 "no_auto_visible": false 00:18:22.956 } 00:18:22.956 } 00:18:22.956 }, 00:18:22.956 { 00:18:22.956 "method": "nvmf_subsystem_add_listener", 00:18:22.956 "params": { 00:18:22.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.956 "listen_address": { 00:18:22.956 "trtype": "TCP", 00:18:22.956 "adrfam": "IPv4", 00:18:22.956 "traddr": "10.0.0.2", 00:18:22.956 "trsvcid": "4420" 00:18:22.956 }, 00:18:22.956 "secure_channel": true 00:18:22.956 } 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 } 00:18:22.956 ] 00:18:22.956 }' 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3858264 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3858264 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3858264 ']' 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.956 13:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.957 [2024-07-15 13:10:44.627980] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
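The JSON blob fed to nvmf_tgt through /dev/fd/62 above is just the startup-config form of a short RPC sequence. A minimal sketch of the equivalent live calls, assuming the key file, NQNs, bdev size, and listener address recorded in this log — exact flag spellings vary between SPDK releases, so treat this as illustrative rather than the harness's literal commands:

    # Register the PSK file under the name "key0" that the subsystem config references.
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IMVN0fzbqL
    # TCP transport plus a subsystem that only admits explicitly named hosts.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
    # 32 MiB malloc bdev (8192 blocks x 4096 bytes, matching bdev_malloc_create above).
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    # Tie host1 to the named PSK and open a TLS-capable listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel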
00:18:22.957 [2024-07-15 13:10:44.628067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.215 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.215 [2024-07-15 13:10:44.692561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.215 [2024-07-15 13:10:44.801497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.215 [2024-07-15 13:10:44.801556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.215 [2024-07-15 13:10:44.801569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.215 [2024-07-15 13:10:44.801580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.215 [2024-07-15 13:10:44.801589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.215 [2024-07-15 13:10:44.801660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.473 [2024-07-15 13:10:45.048566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.473 [2024-07-15 13:10:45.080565] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.473 [2024-07-15 13:10:45.091089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3858416 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3858416 /var/tmp/bdevperf.sock 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3858416 ']' 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.039 13:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:24.039 "subsystems": [ 00:18:24.039 { 00:18:24.039 "subsystem": "keyring", 00:18:24.039 "config": [ 00:18:24.039 { 00:18:24.039 "method": "keyring_file_add_key", 00:18:24.039 "params": { 00:18:24.039 "name": "key0", 00:18:24.039 "path": "/tmp/tmp.IMVN0fzbqL" 00:18:24.039 } 00:18:24.039 } 00:18:24.039 ] 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "subsystem": "iobuf", 00:18:24.039 "config": [ 00:18:24.039 { 00:18:24.039 "method": "iobuf_set_options", 00:18:24.039 "params": { 00:18:24.039 "small_pool_count": 8192, 00:18:24.039 "large_pool_count": 1024, 00:18:24.039 "small_bufsize": 8192, 00:18:24.039 "large_bufsize": 135168 00:18:24.039 } 00:18:24.039 } 
00:18:24.039 ] 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "subsystem": "sock", 00:18:24.039 "config": [ 00:18:24.039 { 00:18:24.039 "method": "sock_set_default_impl", 00:18:24.039 "params": { 00:18:24.039 "impl_name": "posix" 00:18:24.039 } 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "method": "sock_impl_set_options", 00:18:24.039 "params": { 00:18:24.039 "impl_name": "ssl", 00:18:24.039 "recv_buf_size": 4096, 00:18:24.039 "send_buf_size": 4096, 00:18:24.039 "enable_recv_pipe": true, 00:18:24.039 "enable_quickack": false, 00:18:24.039 "enable_placement_id": 0, 00:18:24.039 "enable_zerocopy_send_server": true, 00:18:24.039 "enable_zerocopy_send_client": false, 00:18:24.039 "zerocopy_threshold": 0, 00:18:24.039 "tls_version": 0, 00:18:24.039 "enable_ktls": false 00:18:24.039 } 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "method": "sock_impl_set_options", 00:18:24.039 "params": { 00:18:24.039 "impl_name": "posix", 00:18:24.039 "recv_buf_size": 2097152, 00:18:24.039 "send_buf_size": 2097152, 00:18:24.039 "enable_recv_pipe": true, 00:18:24.039 "enable_quickack": false, 00:18:24.039 "enable_placement_id": 0, 00:18:24.039 "enable_zerocopy_send_server": true, 00:18:24.039 "enable_zerocopy_send_client": false, 00:18:24.039 "zerocopy_threshold": 0, 00:18:24.039 "tls_version": 0, 00:18:24.039 "enable_ktls": false 00:18:24.039 } 00:18:24.039 } 00:18:24.039 ] 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "subsystem": "vmd", 00:18:24.039 "config": [] 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "subsystem": "accel", 00:18:24.039 "config": [ 00:18:24.039 { 00:18:24.039 "method": "accel_set_options", 00:18:24.039 "params": { 00:18:24.039 "small_cache_size": 128, 00:18:24.039 "large_cache_size": 16, 00:18:24.039 "task_count": 2048, 00:18:24.039 "sequence_count": 2048, 00:18:24.039 "buf_count": 2048 00:18:24.039 } 00:18:24.039 } 00:18:24.039 ] 00:18:24.039 }, 00:18:24.039 { 00:18:24.039 "subsystem": "bdev", 00:18:24.040 "config": [ 00:18:24.040 { 00:18:24.040 "method": "bdev_set_options", 00:18:24.040 "params": { 00:18:24.040 "bdev_io_pool_size": 65535, 00:18:24.040 "bdev_io_cache_size": 256, 00:18:24.040 "bdev_auto_examine": true, 00:18:24.040 "iobuf_small_cache_size": 128, 00:18:24.040 "iobuf_large_cache_size": 16 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_raid_set_options", 00:18:24.040 "params": { 00:18:24.040 "process_window_size_kb": 1024 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_iscsi_set_options", 00:18:24.040 "params": { 00:18:24.040 "timeout_sec": 30 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_nvme_set_options", 00:18:24.040 "params": { 00:18:24.040 "action_on_timeout": "none", 00:18:24.040 "timeout_us": 0, 00:18:24.040 "timeout_admin_us": 0, 00:18:24.040 "keep_alive_timeout_ms": 10000, 00:18:24.040 "arbitration_burst": 0, 00:18:24.040 "low_priority_weight": 0, 00:18:24.040 "medium_priority_weight": 0, 00:18:24.040 "high_priority_weight": 0, 00:18:24.040 "nvme_adminq_poll_period_us": 10000, 00:18:24.040 "nvme_ioq_poll_period_us": 0, 00:18:24.040 "io_queue_requests": 512, 00:18:24.040 "delay_cmd_submit": true, 00:18:24.040 "transport_retry_count": 4, 00:18:24.040 "bdev_retry_count": 3, 00:18:24.040 "transport_ack_timeout": 0, 00:18:24.040 "ctrlr_loss_timeout_sec": 0, 00:18:24.040 "reconnect_delay_sec": 0, 00:18:24.040 "fast_io_fail_timeout_sec": 0, 00:18:24.040 "disable_auto_failback": false, 00:18:24.040 "generate_uuids": false, 00:18:24.040 "transport_tos": 0, 00:18:24.040 "nvme_error_stat": false, 
00:18:24.040 "rdma_srq_size": 0, 00:18:24.040 "io_path_stat": false, 00:18:24.040 "allow_accel_sequence": false, 00:18:24.040 "rdma_max_cq_size": 0, 00:18:24.040 "rdma_cm_event_timeout_ms": 0, 00:18:24.040 "dhchap_digests": [ 00:18:24.040 "sha256", 00:18:24.040 "sha384", 00:18:24.040 "sha512" 00:18:24.040 ], 00:18:24.040 "dhchap_dhgroups": [ 00:18:24.040 "null", 00:18:24.040 "ffdhe2048", 00:18:24.040 "ffdhe3072", 00:18:24.040 "ffdhe4096", 00:18:24.040 "ffdhe6144", 00:18:24.040 "ffdhe8192" 00:18:24.040 ] 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_nvme_attach_controller", 00:18:24.040 "params": { 00:18:24.040 "name": "nvme0", 00:18:24.040 "trtype": "TCP", 00:18:24.040 "adrfam": "IPv4", 00:18:24.040 "traddr": "10.0.0.2", 00:18:24.040 "trsvcid": "4420", 00:18:24.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.040 "prchk_reftag": false, 00:18:24.040 "prchk_guard": false, 00:18:24.040 "ctrlr_loss_timeout_sec": 0, 00:18:24.040 "reconnect_delay_sec": 0, 00:18:24.040 "fast_io_fail_timeout_sec": 0, 00:18:24.040 "psk": "key0", 00:18:24.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.040 "hdgst": false, 00:18:24.040 "ddgst": false 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_nvme_set_hotplug", 00:18:24.040 "params": { 00:18:24.040 "period_us": 100000, 00:18:24.040 "enable": false 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_enable_histogram", 00:18:24.040 "params": { 00:18:24.040 "name": "nvme0n1", 00:18:24.040 "enable": true 00:18:24.040 } 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "method": "bdev_wait_for_examine" 00:18:24.040 } 00:18:24.040 ] 00:18:24.040 }, 00:18:24.040 { 00:18:24.040 "subsystem": "nbd", 00:18:24.040 "config": [] 00:18:24.040 } 00:18:24.040 ] 00:18:24.040 }' 00:18:24.040 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.040 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.040 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.040 13:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.040 [2024-07-15 13:10:45.642325] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:18:24.040 [2024-07-15 13:10:45.642397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858416 ] 00:18:24.040 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.040 [2024-07-15 13:10:45.703026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.298 [2024-07-15 13:10:45.819226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.555 [2024-07-15 13:10:45.998967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.119 13:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.119 13:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.119 13:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:25.119 13:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:25.379 13:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.379 13:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.379 Running I/O for 1 seconds... 00:18:26.313 00:18:26.314 Latency(us) 00:18:26.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.314 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.314 Verification LBA range: start 0x0 length 0x2000 00:18:26.314 nvme0n1 : 1.04 2643.85 10.33 0.00 0.00 47506.33 7184.69 75342.13 00:18:26.314 =================================================================================================================== 00:18:26.314 Total : 2643.85 10.33 0.00 0.00 47506.33 7184.69 75342.13 00:18:26.314 { 00:18:26.314 "core_count": 1, 00:18:26.314 "test_results": [ 00:18:26.314 { 00:18:26.314 "job": "nvme0n1", 00:18:26.314 "test_status": "finished", 00:18:26.314 "core_mask": "0x2", 00:18:26.314 "workload": "verify", 00:18:26.314 "verify_LBA_range_start": 0, 00:18:26.314 "verify_LBA_range_len": 8192, 00:18:26.314 "queue_depth": 128, 00:18:26.314 "io_size": 4096, 00:18:26.314 "runtime": 1.0424189567565918, 00:18:26.314 "io_per_second": 2643.850505410972, 00:18:26.314 "MiB_per_second": 10.327541036761609, 00:18:26.314 "fails_per_second": 0.0, 00:18:26.314 "timeout_per_second": 0.0, 00:18:26.314 "average_latency_us": 47506.33407514917, 00:18:26.314 "min_latency_us": 7184.687407407408, 00:18:26.314 "max_latency_us": 75342.1274074074 00:18:26.314 } 00:18:26.314 ] 00:18:26.314 } 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # 
shm_files=nvmf_trace.0 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:26.314 13:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:26.314 nvmf_trace.0 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3858416 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3858416 ']' 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3858416 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3858416 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3858416' 00:18:26.571 killing process with pid 3858416 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3858416 00:18:26.571 Received shutdown signal, test time was about 1.000000 seconds 00:18:26.571 00:18:26.571 Latency(us) 00:18:26.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.571 =================================================================================================================== 00:18:26.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.571 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3858416 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.829 rmmod nvme_tcp 00:18:26.829 rmmod nvme_fabrics 00:18:26.829 rmmod nvme_keyring 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3858264 ']' 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3858264 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3858264 ']' 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3858264 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3858264 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3858264' 00:18:26.829 killing process with pid 3858264 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3858264 00:18:26.829 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3858264 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.087 13:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.617 13:10:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:29.617 13:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.oFzm7zxFRS /tmp/tmp.y5OdcfITrm /tmp/tmp.IMVN0fzbqL 00:18:29.617 00:18:29.617 real 1m22.763s 00:18:29.617 user 2m11.566s 00:18:29.617 sys 0m27.761s 00:18:29.617 13:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.617 13:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.617 ************************************ 00:18:29.617 END TEST nvmf_tls 00:18:29.617 ************************************ 00:18:29.617 13:10:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:29.617 13:10:50 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:29.617 13:10:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:29.617 13:10:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.617 13:10:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:29.618 ************************************ 00:18:29.618 START TEST nvmf_fips 00:18:29.618 ************************************ 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:29.618 * Looking for test storage... 
00:18:29.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.618 13:10:50 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:29.618 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:29.619 13:10:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:29.619 Error setting digest 00:18:29.619 00D22D5F667F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:29.619 00D22D5F667F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.619 13:10:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.551 
13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:18:31.551 00:18:31.551 --- 10.0.0.2 ping statistics --- 00:18:31.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.551 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:18:31.551 00:18:31.551 --- 10.0.0.1 ping statistics --- 00:18:31.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.551 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.551 13:10:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3860768 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3860768 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3860768 ']' 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.551 13:10:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.551 [2024-07-15 13:10:53.078183] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:31.551 [2024-07-15 13:10:53.078292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.552 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.552 [2024-07-15 13:10:53.146822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.809 [2024-07-15 13:10:53.262449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.809 [2024-07-15 13:10:53.262513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:31.809 [2024-07-15 13:10:53.262530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.809 [2024-07-15 13:10:53.262543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.809 [2024-07-15 13:10:53.262554] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.809 [2024-07-15 13:10:53.262595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:32.372 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.629 [2024-07-15 13:10:54.298669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.629 [2024-07-15 13:10:54.314654] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.629 [2024-07-15 13:10:54.314897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.887 [2024-07-15 13:10:54.346036] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:32.887 malloc0 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3860928 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3860928 /var/tmp/bdevperf.sock 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3860928 ']' 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.887 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:32.887 [2024-07-15 13:10:54.431259] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:32.887 [2024-07-15 13:10:54.431334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860928 ] 00:18:32.887 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.887 [2024-07-15 13:10:54.488873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.145 [2024-07-15 13:10:54.596286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.145 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.145 13:10:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:33.145 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.403 [2024-07-15 13:10:54.910583] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.403 [2024-07-15 13:10:54.910708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.403 TLSTESTn1 00:18:33.403 13:10:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.660 Running I/O for 10 seconds... 
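Note: the trace above is fips.sh assembling the initiator side of the TLS test. bdevperf is launched idle (-z, wait for RPC) on its own RPC socket, rpc.py attaches a TLS-protected NVMe/TCP controller using the PSK written to key.txt earlier, and bdevperf.py triggers the 10-second verify run whose results follow. A minimal stand-alone sketch of that sequence, assuming a target already listening on 10.0.0.2:4420; $SPDK_DIR and $PSK_FILE are illustrative stand-ins for the workspace paths shown in the log:

    # start bdevperf idle (-z) on a private RPC socket; 128-deep, 4 KiB verify workload for 10 s
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach a TLS-enabled NVMe/TCP controller with the pre-shared key (same flags as the trace)
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $PSK_FILE

    # kick off the configured workload and collect the JSON results printed below
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

bdevperf has to start idle here because the TLSTEST bdev only exists after the attach RPC succeeds; perform_tests then runs the workload against the resulting TLSTESTn1 namespace.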
00:18:43.620
00:18:43.620 Latency(us)
00:18:43.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:43.620 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:43.620 Verification LBA range: start 0x0 length 0x2000
00:18:43.620 TLSTESTn1 : 10.04 2864.40 11.19 0.00 0.00 44574.16 11213.94 71846.87
00:18:43.620 ===================================================================================================================
00:18:43.620 Total : 2864.40 11.19 0.00 0.00 44574.16 11213.94 71846.87
00:18:43.620 {
00:18:43.620 "core_count": 1,
00:18:43.620 "test_results": [
00:18:43.620 {
00:18:43.620 "job": "TLSTESTn1",
00:18:43.620 "test_status": "finished",
00:18:43.620 "core_mask": "0x4",
00:18:43.620 "workload": "verify",
00:18:43.620 "verify_LBA_range_start": 0,
00:18:43.620 "verify_LBA_range_len": 8192,
00:18:43.620 "queue_depth": 128,
00:18:43.620 "io_size": 4096,
00:18:43.620 "runtime": 10.041887283325195,
00:18:43.620 "io_per_second": 2864.4018798458897,
00:18:43.620 "MiB_per_second": 11.189069843148006,
00:18:43.620 "fails_per_second": 0.0,
00:18:43.620 "timeout_per_second": 0.0,
00:18:43.620 "average_latency_us": 44574.15646430466,
00:18:43.620 "min_latency_us": 11213.937777777777,
00:18:43.620 "max_latency_us": 71846.87407407408
00:18:43.620 }
00:18:43.620 ]
00:18:43.620 }
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:43.620 nvmf_trace.0
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3860928
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3860928 ']'
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3860928
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3860928
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3860928'
00:18:43.620 killing process with pid 3860928
00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill
3860928 00:18:43.620 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.620 00:18:43.620 Latency(us) 00:18:43.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.620 =================================================================================================================== 00:18:43.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.620 [2024-07-15 13:11:05.297746] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:43.620 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3860928 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.878 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.878 rmmod nvme_tcp 00:18:44.137 rmmod nvme_fabrics 00:18:44.137 rmmod nvme_keyring 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3860768 ']' 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3860768 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3860768 ']' 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3860768 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3860768 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3860768' 00:18:44.137 killing process with pid 3860768 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3860768 00:18:44.137 [2024-07-15 13:11:05.653476] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:44.137 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3860768 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.395 
13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.395 13:11:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.301 13:11:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.301 13:11:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.301 00:18:46.301 real 0m17.159s 00:18:46.301 user 0m18.873s 00:18:46.301 sys 0m6.957s 00:18:46.301 13:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.301 13:11:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 ************************************ 00:18:46.301 END TEST nvmf_fips 00:18:46.301 ************************************ 00:18:46.559 13:11:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:46.559 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:46.559 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:46.559 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:46.559 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:46.559 13:11:08 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.559 13:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@318 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.461 13:11:10 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:48.462 13:11:10 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:48.462 13:11:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:48.462 13:11:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.462 13:11:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 ************************************ 00:18:48.462 START TEST nvmf_perf_adq 00:18:48.462 ************************************ 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:48.462 * Looking for test storage... 00:18:48.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # 
gather_supported_nvmf_pci_devs 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.462 13:11:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:50.360 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:18:50.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.361 
13:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:50.361 13:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:50.927 13:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:53.454 13:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:58.800 13:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:58.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:58.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 
-- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:58.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:58.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:18:58.801 00:18:58.801 --- 10.0.0.2 ping statistics --- 00:18:58.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.801 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:18:58.801 00:18:58.801 --- 10.0.0.1 ping statistics --- 00:18:58.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.801 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.801 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3866682 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3866682 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3866682 ']' 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.802 13:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.802 [2024-07-15 13:11:19.749972] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:18:58.802 [2024-07-15 13:11:19.750047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.802 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.802 [2024-07-15 13:11:19.818858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.802 [2024-07-15 13:11:19.936528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.802 [2024-07-15 13:11:19.936592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.802 [2024-07-15 13:11:19.936608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.802 [2024-07-15 13:11:19.936622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.802 [2024-07-15 13:11:19.936633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.802 [2024-07-15 13:11:19.936727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.802 [2024-07-15 13:11:19.936782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.802 [2024-07-15 13:11:19.936902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.802 [2024-07-15 13:11:19.936905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 
-- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.060 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 [2024-07-15 13:11:20.860626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 Malloc1 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.319 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.320 [2024-07-15 13:11:20.911697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.320 13:11:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.320 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3866840 00:18:59.320 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
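The target bring-up for this run reduces to the short RPC sequence traced above. As a minimal sketch, the same sequence can be replayed against a running nvmf_tgt with SPDK's stock scripts/rpc.py instead of the rpc_cmd test helper; the rpc.py path below is assumed from the workspace layout, while every method name and flag is copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
    $rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 0/1 argument to adq_configure_nvmf_target only toggles the --enable-placement-id and --sock-priority values, which is the sole difference between this bring-up and the ADQ-enabled one later in the run.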
00:18:59.320 13:11:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:59.320 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.221 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:01.482 "tick_rate": 2700000000, 00:19:01.482 "poll_groups": [ 00:19:01.482 { 00:19:01.482 "name": "nvmf_tgt_poll_group_000", 00:19:01.482 "admin_qpairs": 1, 00:19:01.482 "io_qpairs": 1, 00:19:01.482 "current_admin_qpairs": 1, 00:19:01.482 "current_io_qpairs": 1, 00:19:01.482 "pending_bdev_io": 0, 00:19:01.482 "completed_nvme_io": 20865, 00:19:01.482 "transports": [ 00:19:01.482 { 00:19:01.482 "trtype": "TCP" 00:19:01.482 } 00:19:01.482 ] 00:19:01.482 }, 00:19:01.482 { 00:19:01.482 "name": "nvmf_tgt_poll_group_001", 00:19:01.482 "admin_qpairs": 0, 00:19:01.482 "io_qpairs": 1, 00:19:01.482 "current_admin_qpairs": 0, 00:19:01.482 "current_io_qpairs": 1, 00:19:01.482 "pending_bdev_io": 0, 00:19:01.482 "completed_nvme_io": 21428, 00:19:01.482 "transports": [ 00:19:01.482 { 00:19:01.482 "trtype": "TCP" 00:19:01.482 } 00:19:01.482 ] 00:19:01.482 }, 00:19:01.482 { 00:19:01.482 "name": "nvmf_tgt_poll_group_002", 00:19:01.482 "admin_qpairs": 0, 00:19:01.482 "io_qpairs": 1, 00:19:01.482 "current_admin_qpairs": 0, 00:19:01.482 "current_io_qpairs": 1, 00:19:01.482 "pending_bdev_io": 0, 00:19:01.482 "completed_nvme_io": 17063, 00:19:01.482 "transports": [ 00:19:01.482 { 00:19:01.482 "trtype": "TCP" 00:19:01.482 } 00:19:01.482 ] 00:19:01.482 }, 00:19:01.482 { 00:19:01.482 "name": "nvmf_tgt_poll_group_003", 00:19:01.482 "admin_qpairs": 0, 00:19:01.482 "io_qpairs": 1, 00:19:01.482 "current_admin_qpairs": 0, 00:19:01.482 "current_io_qpairs": 1, 00:19:01.482 "pending_bdev_io": 0, 00:19:01.482 "completed_nvme_io": 18433, 00:19:01.482 "transports": [ 00:19:01.482 { 00:19:01.482 "trtype": "TCP" 00:19:01.482 } 00:19:01.482 ] 00:19:01.482 } 00:19:01.482 ] 00:19:01.482 }' 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:01.482 13:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3866840 00:19:09.598 Initializing NVMe Controllers 00:19:09.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:09.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:09.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:09.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:09.598 Initialization complete. Launching workers. 
00:19:09.598 ======================================================== 00:19:09.598 Latency(us) 00:19:09.598 Device Information : IOPS MiB/s Average min max 00:19:09.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9663.60 37.75 6623.50 3866.70 9647.16 00:19:09.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11243.54 43.92 5693.31 2772.59 8174.95 00:19:09.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8930.83 34.89 7168.43 1878.25 10773.71 00:19:09.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10979.95 42.89 5830.08 3288.85 7866.15 00:19:09.598 ======================================================== 00:19:09.598 Total : 40817.91 159.44 6273.08 1878.25 10773.71 00:19:09.598 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.598 rmmod nvme_tcp 00:19:09.598 rmmod nvme_fabrics 00:19:09.598 rmmod nvme_keyring 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3866682 ']' 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3866682 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3866682 ']' 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3866682 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866682 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866682' 00:19:09.598 killing process with pid 3866682 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3866682 00:19:09.598 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3866682 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.857 13:11:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.394 13:11:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.394 13:11:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:12.394 13:11:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:12.394 13:11:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:14.930 13:11:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.201 13:11:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.201 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
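The discovery loop being traced here is easier to read in one piece: for each matched PCI function, common.sh resolves the bound kernel netdev through sysfs and keeps only the interface name. A condensed sketch of that idiom, using the two device functions seen in this run:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # glob expands to e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keep cvl_0_0 etc.
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

The (( 1 == 0 )) lines in the trace appear to be the expansion of a count-against-zero guard on that glob, checked before the names are appended to net_devs.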
00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:20.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.202 
13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:19:20.202 00:19:20.202 --- 10.0.0.2 ping statistics --- 00:19:20.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.202 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:20.202 00:19:20.202 --- 10.0.0.1 ping statistics --- 00:19:20.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.202 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:20.202 net.core.busy_poll = 1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:20.202 net.core.busy_read = 1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3869452 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3869452 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3869452 ']' 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 [2024-07-15 13:11:41.334958] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:20.202 [2024-07-15 13:11:41.335032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.202 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.202 [2024-07-15 13:11:41.404422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.202 [2024-07-15 13:11:41.516897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.202 [2024-07-15 13:11:41.516952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.202 [2024-07-15 13:11:41.516965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.202 [2024-07-15 13:11:41.516976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.202 [2024-07-15 13:11:41.516986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
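Everything ADQ-specific in the driver setup sits in the handful of commands traced just above: hardware TC offload is enabled on the port, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. Collected into one block for reference (all values verbatim from this run; on this host each command runs inside the cvl_0_0_ns_spdk namespace via ip netns exec):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two TCs: TC0 owns queues 0-1, TC1 owns queues 2-3, offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # hardware-only (skip_sw) match on dst 10.0.0.2:4420, steered into TC 1
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1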
00:19:20.202 [2024-07-15 13:11:41.517061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.202 [2024-07-15 13:11:41.517133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.202 [2024-07-15 13:11:41.517198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.202 [2024-07-15 13:11:41.517200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:20.202 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 [2024-07-15 13:11:41.767874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 Malloc1 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 [2024-07-15 13:11:41.821208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3869599 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:20.203 13:11:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:20.203 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:22.728 "tick_rate": 2700000000, 00:19:22.728 "poll_groups": [ 00:19:22.728 { 00:19:22.728 "name": "nvmf_tgt_poll_group_000", 00:19:22.728 "admin_qpairs": 1, 00:19:22.728 "io_qpairs": 2, 00:19:22.728 "current_admin_qpairs": 1, 00:19:22.728 "current_io_qpairs": 2, 00:19:22.728 "pending_bdev_io": 0, 00:19:22.728 "completed_nvme_io": 21847, 00:19:22.728 "transports": [ 00:19:22.728 { 00:19:22.728 "trtype": "TCP" 00:19:22.728 } 00:19:22.728 ] 00:19:22.728 }, 00:19:22.728 { 00:19:22.728 "name": "nvmf_tgt_poll_group_001", 00:19:22.728 "admin_qpairs": 0, 00:19:22.728 "io_qpairs": 2, 00:19:22.728 "current_admin_qpairs": 0, 00:19:22.728 "current_io_qpairs": 2, 00:19:22.728 "pending_bdev_io": 0, 00:19:22.728 "completed_nvme_io": 27729, 00:19:22.728 "transports": [ 00:19:22.728 { 00:19:22.728 "trtype": "TCP" 00:19:22.728 } 00:19:22.728 ] 00:19:22.728 }, 00:19:22.728 { 00:19:22.728 "name": "nvmf_tgt_poll_group_002", 00:19:22.728 "admin_qpairs": 0, 00:19:22.728 "io_qpairs": 0, 00:19:22.728 "current_admin_qpairs": 0, 00:19:22.728 "current_io_qpairs": 0, 00:19:22.728 "pending_bdev_io": 0, 00:19:22.728 "completed_nvme_io": 0, 
00:19:22.728 "transports": [ 00:19:22.728 { 00:19:22.728 "trtype": "TCP" 00:19:22.728 } 00:19:22.728 ] 00:19:22.728 }, 00:19:22.728 { 00:19:22.728 "name": "nvmf_tgt_poll_group_003", 00:19:22.728 "admin_qpairs": 0, 00:19:22.728 "io_qpairs": 0, 00:19:22.728 "current_admin_qpairs": 0, 00:19:22.728 "current_io_qpairs": 0, 00:19:22.728 "pending_bdev_io": 0, 00:19:22.728 "completed_nvme_io": 0, 00:19:22.728 "transports": [ 00:19:22.728 { 00:19:22.728 "trtype": "TCP" 00:19:22.728 } 00:19:22.728 ] 00:19:22.728 } 00:19:22.728 ] 00:19:22.728 }' 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:22.728 13:11:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3869599 00:19:30.890 Initializing NVMe Controllers 00:19:30.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:30.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:30.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:30.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:30.890 Initialization complete. Launching workers. 00:19:30.890 ======================================================== 00:19:30.890 Latency(us) 00:19:30.890 Device Information : IOPS MiB/s Average min max 00:19:30.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5725.59 22.37 11218.51 1765.04 56954.86 00:19:30.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7184.58 28.06 8911.33 1707.11 53928.33 00:19:30.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5856.29 22.88 10932.76 1626.49 57849.30 00:19:30.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7291.38 28.48 8807.77 1815.81 52875.83 00:19:30.890 ======================================================== 00:19:30.890 Total : 26057.84 101.79 9843.60 1626.49 57849.30 00:19:30.890 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.890 rmmod nvme_tcp 00:19:30.890 rmmod nvme_fabrics 00:19:30.890 rmmod nvme_keyring 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3869452 ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3869452 ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3869452' 00:19:30.890 killing process with pid 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3869452 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.890 13:11:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.793 13:11:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.793 13:11:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:32.793 00:19:32.793 real 0m44.360s 00:19:32.793 user 2m36.046s 00:19:32.793 sys 0m12.034s 00:19:32.793 13:11:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.793 13:11:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.793 ************************************ 00:19:32.793 END TEST nvmf_perf_adq 00:19:32.793 ************************************ 00:19:32.793 13:11:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:32.793 13:11:54 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:32.793 13:11:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.793 13:11:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.793 13:11:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:33.052 ************************************ 00:19:33.052 START TEST nvmf_shutdown 00:19:33.052 ************************************ 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:33.052 * Looking for test storage... 
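Before the shutdown suite gets going, the pass criterion used twice in the ADQ test above is worth spelling out. Both runs scrape nvmf_get_stats and count poll groups by their current_io_qpairs value: without ADQ the four perf connections must land one per poll group, while with placement-id 1 and the tc steering they must collapse onto two groups, leaving the other two idle. A sketch of both checks under the same assumptions as the rpc.py sketch earlier (jq expressions verbatim from the trace):

    # ADQ off: every one of the 4 poll groups should own exactly 1 I/O qpair
    count=$($rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
    [[ $count -ne 4 ]] && echo "unexpected qpair placement"   # trace: count=4, check passes

    # ADQ on: at least 2 of the 4 poll groups should be completely idle
    count=$($rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
    [[ $count -lt 2 ]] && echo "ADQ steering not effective"   # trace: count=2, check passes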
00:19:33.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:33.052 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:33.053 ************************************ 00:19:33.053 START TEST nvmf_shutdown_tc1 00:19:33.053 ************************************ 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:33.053 13:11:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.053 13:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.955 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.956 13:11:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:34.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:34.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:19:34.956 00:19:34.956 --- 10.0.0.2 ping statistics --- 00:19:34.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.956 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:34.956 00:19:34.956 --- 10.0.0.1 ping statistics --- 00:19:34.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.956 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3872762 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3872762 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3872762 ']' 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.956 13:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:35.215 [2024-07-15 13:11:56.673961] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
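The nvmf_tcp_init sequence traced above builds the whole test topology on one host: one physical port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic crosses real hardware. Condensed to its essential commands (a sketch only; the cvl_0_* interface names and the 10.0.0.0/24 addresses are specific to this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                       # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # namespace -> initiator port

The two successful pings are the gate for the "return 0" at nvmf/common.sh@422, and every target-side command from here on is prefixed with the NVMF_TARGET_NS_CMD wrapper, which is why nvmf_tgt is launched just above as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt".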
00:19:35.215 [2024-07-15 13:11:56.674038] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.215 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.215 [2024-07-15 13:11:56.743981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.215 [2024-07-15 13:11:56.864848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.215 [2024-07-15 13:11:56.864914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.215 [2024-07-15 13:11:56.864951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.215 [2024-07-15 13:11:56.864965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.215 [2024-07-15 13:11:56.864990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.215 [2024-07-15 13:11:56.865276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.215 [2024-07-15 13:11:56.865301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.215 [2024-07-15 13:11:56.865360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:35.215 [2024-07-15 13:11:56.865363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.150 [2024-07-15 13:11:57.670941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:36.150 13:11:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.150 13:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.150 Malloc1 00:19:36.150 [2024-07-15 13:11:57.760138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.150 Malloc2 00:19:36.150 Malloc3 00:19:36.408 Malloc4 00:19:36.408 Malloc5 00:19:36.408 Malloc6 00:19:36.408 Malloc7 00:19:36.408 Malloc8 00:19:36.667 Malloc9 00:19:36.667 Malloc10 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3872953 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3872953 
/var/tmp/bdevperf.sock 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3872953 ']' 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 
"name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.667 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "name": "Nvme$subsystem", 
00:19:36.667 "trtype": "$TEST_TRANSPORT", 00:19:36.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.667 "adrfam": "ipv4", 00:19:36.667 "trsvcid": "$NVMF_PORT", 00:19:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.667 "hdgst": ${hdgst:-false}, 00:19:36.667 "ddgst": ${ddgst:-false} 00:19:36.667 }, 00:19:36.667 "method": "bdev_nvme_attach_controller" 00:19:36.667 } 00:19:36.667 EOF 00:19:36.667 )") 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.668 { 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme$subsystem", 00:19:36.668 "trtype": "$TEST_TRANSPORT", 00:19:36.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "$NVMF_PORT", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.668 "hdgst": ${hdgst:-false}, 00:19:36.668 "ddgst": ${ddgst:-false} 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 } 00:19:36.668 EOF 00:19:36.668 )") 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.668 { 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme$subsystem", 00:19:36.668 "trtype": "$TEST_TRANSPORT", 00:19:36.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "$NVMF_PORT", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.668 "hdgst": ${hdgst:-false}, 00:19:36.668 "ddgst": ${ddgst:-false} 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 } 00:19:36.668 EOF 00:19:36.668 )") 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.668 { 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme$subsystem", 00:19:36.668 "trtype": "$TEST_TRANSPORT", 00:19:36.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "$NVMF_PORT", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.668 "hdgst": ${hdgst:-false}, 00:19:36.668 "ddgst": ${ddgst:-false} 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 } 00:19:36.668 EOF 00:19:36.668 )") 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
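The expansion being traced here is gen_nvmf_target_json from nvmf/common.sh: each pass of the loop captures a here-doc with $subsystem substituted and appends the resulting JSON fragment to the config array, then the fragments are comma-joined and fed through jq; the fully resolved document is printed next. A stripped-down sketch of the pattern (emit_bdev_json is a hypothetical name for illustration; the real helper emits the full bdev_nvme_attach_controller parameter set shown in these fragments):

emit_bdev_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do    # defaults to subsystem 1 when called bare
        config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$subsystem",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" } }
EOF
        )")
    done
    # "${config[*]}" joins elements on the first character of IFS, so the
    # fragments drop into a JSON array as a comma-separated list; jq then
    # pretty-prints and, more importantly, validates the final document.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON
}

Called as emit_bdev_json 1 2 3 it prints a validated config with three attach calls; the run above does the same for subsystems 1 through 10.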
00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:36.668 13:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme1", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme2", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme3", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme4", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme5", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme6", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme7", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme8", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:36.668 "hdgst": false, 
00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme9", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 },{ 00:19:36.668 "params": { 00:19:36.668 "name": "Nvme10", 00:19:36.668 "trtype": "tcp", 00:19:36.668 "traddr": "10.0.0.2", 00:19:36.668 "adrfam": "ipv4", 00:19:36.668 "trsvcid": "4420", 00:19:36.668 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:36.668 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:36.668 "hdgst": false, 00:19:36.668 "ddgst": false 00:19:36.668 }, 00:19:36.668 "method": "bdev_nvme_attach_controller" 00:19:36.668 }' 00:19:36.668 [2024-07-15 13:11:58.271796] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:36.668 [2024-07-15 13:11:58.271907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:36.668 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.668 [2024-07-15 13:11:58.336318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.926 [2024-07-15 13:11:58.446827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3872953 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:38.822 13:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:39.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3872953 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:39.754 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3872762 00:19:39.754 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:39.754 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:39.754 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:39.754 13:12:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.755 "trsvcid": "$NVMF_PORT", 00:19:39.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.755 "hdgst": ${hdgst:-false}, 00:19:39.755 "ddgst": ${ddgst:-false} 00:19:39.755 }, 00:19:39.755 "method": "bdev_nvme_attach_controller" 00:19:39.755 } 00:19:39.755 EOF 00:19:39.755 )") 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.755 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.755 { 00:19:39.755 "params": { 00:19:39.755 "name": "Nvme$subsystem", 00:19:39.755 "trtype": "$TEST_TRANSPORT", 00:19:39.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.755 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "$NVMF_PORT", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.756 "hdgst": ${hdgst:-false}, 00:19:39.756 "ddgst": ${ddgst:-false} 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 } 00:19:39.756 EOF 00:19:39.756 )") 00:19:39.756 13:12:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.756 { 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme$subsystem", 00:19:39.756 "trtype": "$TEST_TRANSPORT", 00:19:39.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "$NVMF_PORT", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.756 "hdgst": ${hdgst:-false}, 00:19:39.756 "ddgst": ${ddgst:-false} 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 } 00:19:39.756 EOF 00:19:39.756 )") 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.756 { 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme$subsystem", 00:19:39.756 "trtype": "$TEST_TRANSPORT", 00:19:39.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "$NVMF_PORT", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.756 "hdgst": ${hdgst:-false}, 00:19:39.756 "ddgst": ${ddgst:-false} 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 } 00:19:39.756 EOF 00:19:39.756 )") 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
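Worth noting how this second expansion is consumed: the earlier "Killed" echo shows the literal <(gen_nvmf_target_json "${num_subsystems[@]}") syntax used for bdev_svc, and this bdevperf run receives the same generated config as --json /dev/fd/62, i.e. through process substitution, so the JSON never touches disk. Roughly (paths shortened to the build tree):

# bash turns <(...) into a /dev/fd/NN path and runs the generator in parallel,
# so there is no temp file to create or clean up afterwards
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1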
00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:39.756 13:12:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme1", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme2", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme3", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme4", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme5", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme6", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme7", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:39.756 "hdgst": false, 00:19:39.756 "ddgst": false 00:19:39.756 }, 00:19:39.756 "method": "bdev_nvme_attach_controller" 00:19:39.756 },{ 00:19:39.756 "params": { 00:19:39.756 "name": "Nvme8", 00:19:39.756 "trtype": "tcp", 00:19:39.756 "traddr": "10.0.0.2", 00:19:39.756 "adrfam": "ipv4", 00:19:39.756 "trsvcid": "4420", 00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:39.756 "hdgst": false, 
00:19:39.756 "ddgst": false
00:19:39.756 },
00:19:39.756 "method": "bdev_nvme_attach_controller"
00:19:39.756 },{
00:19:39.756 "params": {
00:19:39.756 "name": "Nvme9",
00:19:39.756 "trtype": "tcp",
00:19:39.756 "traddr": "10.0.0.2",
00:19:39.756 "adrfam": "ipv4",
00:19:39.756 "trsvcid": "4420",
00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:19:39.756 "hdgst": false,
00:19:39.756 "ddgst": false
00:19:39.756 },
00:19:39.756 "method": "bdev_nvme_attach_controller"
00:19:39.756 },{
00:19:39.756 "params": {
00:19:39.756 "name": "Nvme10",
00:19:39.756 "trtype": "tcp",
00:19:39.756 "traddr": "10.0.0.2",
00:19:39.756 "adrfam": "ipv4",
00:19:39.756 "trsvcid": "4420",
00:19:39.756 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:19:39.756 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:19:39.756 "hdgst": false,
00:19:39.756 "ddgst": false
00:19:39.756 },
00:19:39.756 "method": "bdev_nvme_attach_controller"
00:19:39.756 }'
00:19:39.756 [2024-07-15 13:12:01.290652] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:19:39.756 [2024-07-15 13:12:01.290728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873451 ]
00:19:39.756 EAL: No free 2048 kB hugepages reported on node 1
00:19:39.756 [2024-07-15 13:12:01.355827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:40.014 [2024-07-15 13:12:01.468784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:41.383 Running I/O for 1 seconds...
00:19:42.316
00:19:42.316 Latency(us)
00:19:42.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:42.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme1n1 : 1.16 220.84 13.80 0.00 0.00 286947.37 22233.69 251658.24
00:19:42.316 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme2n1 : 1.16 220.30 13.77 0.00 0.00 281308.35 20777.34 268746.15
00:19:42.316 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme3n1 : 1.08 236.42 14.78 0.00 0.00 258974.15 18447.17 278066.82
00:19:42.316 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme4n1 : 1.18 272.06 17.00 0.00 0.00 221974.98 18350.08 236123.78
00:19:42.316 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme5n1 : 1.17 218.89 13.68 0.00 0.00 271708.54 36311.80 239230.67
00:19:42.316 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme6n1 : 1.17 218.11 13.63 0.00 0.00 268064.05 20097.71 253211.69
00:19:42.316 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme7n1 : 1.19 268.67 16.79 0.00 0.00 214426.36 15534.46 256318.58
00:19:42.316 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme8n1 : 1.20 267.74 16.73 0.00 0.00 211757.13 14951.92 262532.36
00:19:42.316 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme9n1 : 1.18 216.06 13.50 0.00 0.00 257488.21 20388.98 285834.05
00:19:42.316 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:42.316 Verification LBA range: start 0x0 length 0x400
00:19:42.316 Nvme10n1 : 1.19 215.90 13.49 0.00 0.00 253604.98 26214.40 265639.25
00:19:42.316 ===================================================================================================================
00:19:42.316 Total : 2354.99 147.19 0.00 0.00 250073.84 14951.92 285834.05
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:42.573 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:42.573 rmmod nvme_tcp
00:19:42.573 rmmod nvme_fabrics
00:19:42.829 rmmod nvme_keyring
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3872762 ']'
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3872762
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3872762 ']'
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3872762
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3872762
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
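A quick sanity check on the bdevperf table above: with 65536-byte I/Os the MiB/s column is simply IOPS/16 (for Nvme1n1, 220.84 * 65536 / 1048576 comes out at 13.80), and by Little's law the average latency at queue depth 64 should be near 64/IOPS, i.e. 64/220.84 = 0.290 s, close to the reported 286947 us. For instance:

awk 'BEGIN { iops = 220.84                              # Nvme1n1 row above
             printf "%.2f MiB/s\n", iops * 65536 / 1048576
             printf "%.0f us avg latency\n", 64 / iops * 1e6 }'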
00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3872762' 00:19:42.829 killing process with pid 3872762 00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3872762 00:19:42.829 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3872762 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.392 13:12:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.291 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.291 00:19:45.291 real 0m12.326s 00:19:45.291 user 0m36.500s 00:19:45.291 sys 0m3.175s 00:19:45.291 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.291 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.291 ************************************ 00:19:45.291 END TEST nvmf_shutdown_tc1 00:19:45.291 ************************************ 00:19:45.291 13:12:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:45.291 13:12:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:45.292 ************************************ 00:19:45.292 START TEST nvmf_shutdown_tc2 00:19:45.292 ************************************ 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.292 13:12:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:45.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:45.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:45.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:45.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.292 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:45.551 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.551 13:12:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:45.551 00:19:45.551 --- 10.0.0.2 ping statistics --- 00:19:45.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.551 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:19:45.551 00:19:45.551 --- 10.0.0.1 ping statistics --- 00:19:45.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.551 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3874243 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3874243 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3874243 ']' 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.551 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.551 [2024-07-15 13:12:07.185406] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:45.551 [2024-07-15 13:12:07.185500] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.551 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.809 [2024-07-15 13:12:07.253077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.809 [2024-07-15 13:12:07.363466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.809 [2024-07-15 13:12:07.363522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.809 [2024-07-15 13:12:07.363558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.809 [2024-07-15 13:12:07.363569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.809 [2024-07-15 13:12:07.363578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
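The startup just traced reduces to a start-and-poll pattern: launch nvmf_tgt inside the target namespace, then poll its RPC socket until it answers (the doubled "ip netns exec cvl_0_0_ns_spdk" prefix appears because NVMF_APP already carries the namespace wrapper, per the NVMF_APP assignment traced earlier). A rough bash sketch of what waitforlisten is doing here; the rpc_get_methods polling loop is an assumption, not the exact autotest_common.sh source:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll the RPC socket until the target answers; bail out if it dies first
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done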
00:19:45.809 [2024-07-15 13:12:07.363678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.809 [2024-07-15 13:12:07.363748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.809 [2024-07-15 13:12:07.363807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.809 [2024-07-15 13:12:07.363809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.809 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.809 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:45.809 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.809 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.809 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.072 [2024-07-15 13:12:07.522648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.072 13:12:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.072 Malloc1 00:19:46.072 [2024-07-15 13:12:07.602377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.072 Malloc2 00:19:46.072 Malloc3 00:19:46.072 Malloc4 00:19:46.340 Malloc5 00:19:46.340 Malloc6 00:19:46.340 Malloc7 00:19:46.340 Malloc8 00:19:46.340 Malloc9 00:19:46.340 Malloc10 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3874780 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3874780 /var/tmp/bdevperf.sock 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3874780 ']' 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
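In the bdevperf command line above, --json /dev/fd/63 is bash process substitution: gen_nvmf_target_json writes the controller config straight into a pipe, so no temporary file is needed. The invocation is equivalent to:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!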
00:19:46.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.600 "params": { 00:19:46.600 "name": "Nvme$subsystem", 00:19:46.600 "trtype": "$TEST_TRANSPORT", 00:19:46.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.600 "adrfam": "ipv4", 00:19:46.600 "trsvcid": "$NVMF_PORT", 00:19:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.600 "hdgst": ${hdgst:-false}, 00:19:46.600 "ddgst": ${ddgst:-false} 00:19:46.600 }, 00:19:46.600 "method": "bdev_nvme_attach_controller" 00:19:46.600 } 00:19:46.600 EOF 00:19:46.600 )") 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.600 "params": { 00:19:46.600 "name": "Nvme$subsystem", 00:19:46.600 "trtype": "$TEST_TRANSPORT", 00:19:46.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.600 "adrfam": "ipv4", 00:19:46.600 "trsvcid": "$NVMF_PORT", 00:19:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.600 "hdgst": ${hdgst:-false}, 00:19:46.600 "ddgst": ${ddgst:-false} 00:19:46.600 }, 00:19:46.600 "method": "bdev_nvme_attach_controller" 00:19:46.600 } 00:19:46.600 EOF 00:19:46.600 )") 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.600 "params": { 00:19:46.600 "name": "Nvme$subsystem", 00:19:46.600 "trtype": "$TEST_TRANSPORT", 00:19:46.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.600 "adrfam": "ipv4", 00:19:46.600 "trsvcid": "$NVMF_PORT", 00:19:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.600 "hdgst": ${hdgst:-false}, 00:19:46.600 "ddgst": ${ddgst:-false} 00:19:46.600 }, 00:19:46.600 "method": "bdev_nvme_attach_controller" 00:19:46.600 } 00:19:46.600 EOF 00:19:46.600 )") 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.600 "params": { 00:19:46.600 "name": "Nvme$subsystem", 00:19:46.600 "trtype": "$TEST_TRANSPORT", 00:19:46.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.600 "adrfam": "ipv4", 00:19:46.600 "trsvcid": "$NVMF_PORT", 
00:19:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.600 "hdgst": ${hdgst:-false}, 00:19:46.600 "ddgst": ${ddgst:-false} 00:19:46.600 }, 00:19:46.600 "method": "bdev_nvme_attach_controller" 00:19:46.600 } 00:19:46.600 EOF 00:19:46.600 )") 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.600 "params": { 00:19:46.600 "name": "Nvme$subsystem", 00:19:46.600 "trtype": "$TEST_TRANSPORT", 00:19:46.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.600 "adrfam": "ipv4", 00:19:46.600 "trsvcid": "$NVMF_PORT", 00:19:46.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.600 "hdgst": ${hdgst:-false}, 00:19:46.600 "ddgst": ${ddgst:-false} 00:19:46.600 }, 00:19:46.600 "method": "bdev_nvme_attach_controller" 00:19:46.600 } 00:19:46.600 EOF 00:19:46.600 )") 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.600 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.600 { 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme$subsystem", 00:19:46.601 "trtype": "$TEST_TRANSPORT", 00:19:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "$NVMF_PORT", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.601 "hdgst": ${hdgst:-false}, 00:19:46.601 "ddgst": ${ddgst:-false} 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 } 00:19:46.601 EOF 00:19:46.601 )") 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.601 { 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme$subsystem", 00:19:46.601 "trtype": "$TEST_TRANSPORT", 00:19:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "$NVMF_PORT", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.601 "hdgst": ${hdgst:-false}, 00:19:46.601 "ddgst": ${ddgst:-false} 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 } 00:19:46.601 EOF 00:19:46.601 )") 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.601 { 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme$subsystem", 00:19:46.601 "trtype": "$TEST_TRANSPORT", 00:19:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "$NVMF_PORT", 00:19:46.601 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.601 "hdgst": ${hdgst:-false}, 00:19:46.601 "ddgst": ${ddgst:-false} 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 } 00:19:46.601 EOF 00:19:46.601 )") 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.601 { 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme$subsystem", 00:19:46.601 "trtype": "$TEST_TRANSPORT", 00:19:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "$NVMF_PORT", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.601 "hdgst": ${hdgst:-false}, 00:19:46.601 "ddgst": ${ddgst:-false} 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 } 00:19:46.601 EOF 00:19:46.601 )") 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.601 { 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme$subsystem", 00:19:46.601 "trtype": "$TEST_TRANSPORT", 00:19:46.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "$NVMF_PORT", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.601 "hdgst": ${hdgst:-false}, 00:19:46.601 "ddgst": ${ddgst:-false} 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 } 00:19:46.601 EOF 00:19:46.601 )") 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:46.601 13:12:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme1", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.601 "hdgst": false, 00:19:46.601 "ddgst": false 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 },{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme2", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.601 "hdgst": false, 00:19:46.601 "ddgst": false 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 },{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme3", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:46.601 "hdgst": false, 00:19:46.601 "ddgst": false 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 },{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme4", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:46.601 "hdgst": false, 00:19:46.601 "ddgst": false 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 },{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme5", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.601 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:46.601 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:46.601 "hdgst": false, 00:19:46.601 "ddgst": false 00:19:46.601 }, 00:19:46.601 "method": "bdev_nvme_attach_controller" 00:19:46.601 },{ 00:19:46.601 "params": { 00:19:46.601 "name": "Nvme6", 00:19:46.601 "trtype": "tcp", 00:19:46.601 "traddr": "10.0.0.2", 00:19:46.601 "adrfam": "ipv4", 00:19:46.601 "trsvcid": "4420", 00:19:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:46.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:46.602 "hdgst": false, 00:19:46.602 "ddgst": false 00:19:46.602 }, 00:19:46.602 "method": "bdev_nvme_attach_controller" 00:19:46.602 },{ 00:19:46.602 "params": { 00:19:46.602 "name": "Nvme7", 00:19:46.602 "trtype": "tcp", 00:19:46.602 "traddr": "10.0.0.2", 00:19:46.602 "adrfam": "ipv4", 00:19:46.602 "trsvcid": "4420", 00:19:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:46.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:46.602 "hdgst": false, 00:19:46.602 "ddgst": false 00:19:46.602 }, 00:19:46.602 "method": "bdev_nvme_attach_controller" 00:19:46.602 },{ 00:19:46.602 "params": { 00:19:46.602 "name": "Nvme8", 00:19:46.602 "trtype": "tcp", 00:19:46.602 "traddr": "10.0.0.2", 00:19:46.602 "adrfam": "ipv4", 00:19:46.602 "trsvcid": "4420", 00:19:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:46.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:46.602 "hdgst": false, 
00:19:46.602 "ddgst": false 00:19:46.602 }, 00:19:46.602 "method": "bdev_nvme_attach_controller" 00:19:46.602 },{ 00:19:46.602 "params": { 00:19:46.602 "name": "Nvme9", 00:19:46.602 "trtype": "tcp", 00:19:46.602 "traddr": "10.0.0.2", 00:19:46.602 "adrfam": "ipv4", 00:19:46.602 "trsvcid": "4420", 00:19:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:46.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:46.602 "hdgst": false, 00:19:46.602 "ddgst": false 00:19:46.602 }, 00:19:46.602 "method": "bdev_nvme_attach_controller" 00:19:46.602 },{ 00:19:46.602 "params": { 00:19:46.602 "name": "Nvme10", 00:19:46.602 "trtype": "tcp", 00:19:46.602 "traddr": "10.0.0.2", 00:19:46.602 "adrfam": "ipv4", 00:19:46.602 "trsvcid": "4420", 00:19:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:46.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:46.602 "hdgst": false, 00:19:46.602 "ddgst": false 00:19:46.602 }, 00:19:46.602 "method": "bdev_nvme_attach_controller" 00:19:46.602 }' 00:19:46.602 [2024-07-15 13:12:08.106745] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:46.602 [2024-07-15 13:12:08.106821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874780 ] 00:19:46.602 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.602 [2024-07-15 13:12:08.169860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.602 [2024-07-15 13:12:08.281596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.502 Running I/O for 10 seconds... 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.502 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:48.503 13:12:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.761 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3874780 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3874780 ']' 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3874780 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3874780 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3874780' 00:19:49.019 killing process with pid 3874780 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3874780 00:19:49.019 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3874780 00:19:49.019 Received shutdown signal, test time was about 0.874414 seconds 00:19:49.019 00:19:49.019 Latency(us) 00:19:49.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:49.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme1n1 : 0.85 225.71 14.11 0.00 0.00 279961.85 20486.07 265639.25 00:19:49.019 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme2n1 : 0.84 228.96 14.31 0.00 0.00 268791.15 34369.99 231463.44 00:19:49.019 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme3n1 : 0.83 231.23 14.45 0.00 0.00 260591.06 20194.80 264085.81 00:19:49.019 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme4n1 : 0.86 228.44 14.28 0.00 0.00 258119.95 2390.85 256318.58 00:19:49.019 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme5n1 : 0.85 248.23 15.51 0.00 0.00 227409.46 12815.93 256318.58 00:19:49.019 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme6n1 : 0.86 222.79 13.92 0.00 0.00 253421.04 22233.69 242337.56 00:19:49.019 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme7n1 : 0.87 221.84 13.86 0.00 0.00 248863.86 22039.51 264085.81 00:19:49.019 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme8n1 : 0.84 227.90 14.24 0.00 0.00 235267.41 20486.07 240784.12 00:19:49.019 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme9n1 : 0.87 221.05 13.82 0.00 0.00 238196.05 20291.89 268746.15 00:19:49.019 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:49.019 Verification LBA range: start 0x0 length 0x400 00:19:49.019 Nvme10n1 : 0.87 219.78 13.74 0.00 0.00 234029.01 21554.06 298261.62 00:19:49.019 =================================================================================================================== 00:19:49.019 Total : 2275.94 142.25 0.00 0.00 250267.15 2390.85 298261.62 00:19:49.277 13:12:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3874243 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.209 13:12:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.209 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.209 rmmod nvme_tcp 00:19:50.209 rmmod nvme_fabrics 00:19:50.467 rmmod nvme_keyring 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3874243 ']' 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3874243 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3874243 ']' 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3874243 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3874243 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3874243' 00:19:50.467 killing process with pid 3874243 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3874243 00:19:50.467 13:12:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3874243 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.033 13:12:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.939 00:19:52.939 real 0m7.537s 00:19:52.939 user 0m22.360s 00:19:52.939 sys 0m1.489s 
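The pass condition for tc2 is visible in the read_io_count=67 to read_io_count=131 progression traced after "Running I/O for 10 seconds...": the harness polls bdevperf's iostat until at least 100 reads have completed, proving I/O was actually in flight before the target was killed. A rough re-creation of that polling loop; the retry count, threshold, and sleep interval are taken from the trace, the function body is an approximation of shutdown.sh's waitforio:

waitforio() {
    local rpc_sock=$1 bdev=$2 i=10 count
    while (( i-- )); do
        count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        (( count >= 100 )) && return 0   # enough I/O observed; shutdown may proceed
        sleep 0.25
    done
    return 1                             # bdevperf never made progress
}
waitforio /var/tmp/bdevperf.sock Nvme1n1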
00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 ************************************ 00:19:52.939 END TEST nvmf_shutdown_tc2 00:19:52.939 ************************************ 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 ************************************ 00:19:52.939 START TEST nvmf_shutdown_tc3 00:19:52.939 ************************************ 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.939 13:12:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:19:52.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.939 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.940 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:53.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:53.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:19:53.199 00:19:53.199 --- 10.0.0.2 ping statistics --- 00:19:53.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.199 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:53.199 00:19:53.199 --- 10.0.0.1 ping statistics --- 00:19:53.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.199 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3875831 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3875831 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3875831 ']' 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
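Note: the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) boils down to the commands below — a minimal sketch assembled from the lines echoed in this log, not the full nvmf/common.sh logic. One of the two ice ports found earlier is moved into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) exchange real NVMe/TCP traffic over the back-to-back link:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                            # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (root netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # target reachable from initiator...
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # ...and initiator from target

The sub-millisecond round-trip times in the ping statistics confirm the link is up. nvmf_tgt has just been launched inside that namespace (hence the "ip netns exec" prefix on the command line) with core mask -m 0x1E; 0x1E is binary 11110, i.e. cores 1-4, which is why exactly four "Reactor started" notices appear in the startup output below.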
00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.199 13:12:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 [2024-07-15 13:12:14.783848] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:53.199 [2024-07-15 13:12:14.783970] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.199 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.199 [2024-07-15 13:12:14.854055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.459 [2024-07-15 13:12:14.971264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.460 [2024-07-15 13:12:14.971336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.460 [2024-07-15 13:12:14.971352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.460 [2024-07-15 13:12:14.971366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.460 [2024-07-15 13:12:14.971377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.460 [2024-07-15 13:12:14.971485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.460 [2024-07-15 13:12:14.971583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.460 [2024-07-15 13:12:14.971649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.460 [2024-07-15 13:12:14.971651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.027 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.027 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:54.027 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.027 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.027 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.288 [2024-07-15 13:12:15.746665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.288 13:12:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.288 Malloc1 00:19:54.288 [2024-07-15 13:12:15.821687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.288 Malloc2 00:19:54.288 Malloc3 00:19:54.288 Malloc4 00:19:54.547 Malloc5 00:19:54.547 Malloc6 00:19:54.547 Malloc7 00:19:54.547 Malloc8 00:19:54.547 Malloc9 00:19:54.547 Malloc10 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.805 
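Note: the ten "# cat" calls traced above append one RPC batch per subsystem to rpcs.txt; the batch contents are not echoed in this trace, but given the Malloc1..Malloc10 bdevs and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice, each fragment plausibly corresponds to rpc.py-style commands along these lines, for i = 1..10 (hypothetical reconstruction for orientation — the literal fragment lives in target/shutdown.sh, and the bdev sizes are assumed):

    bdev_malloc_create -b Malloc$i 64 512                    # assumed: 64 MiB bdev, 512 B blocks
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

Accumulating the fragments in a file and replaying them through the single bare "# rpc_cmd" call at shutdown.sh@35 is what makes all ten Malloc bdevs appear in one burst in the output above.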
13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3876022 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3876022 /var/tmp/bdevperf.sock 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3876022 ']' 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.805 { 00:19:54.805 "params": { 00:19:54.805 "name": "Nvme$subsystem", 00:19:54.805 "trtype": "$TEST_TRANSPORT", 00:19:54.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.805 "adrfam": "ipv4", 00:19:54.805 "trsvcid": "$NVMF_PORT", 00:19:54.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.805 "hdgst": ${hdgst:-false}, 00:19:54.805 "ddgst": ${ddgst:-false} 00:19:54.805 }, 00:19:54.805 "method": "bdev_nvme_attach_controller" 00:19:54.805 } 00:19:54.805 EOF 00:19:54.805 )") 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.805 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.805 { 00:19:54.805 "params": { 00:19:54.805 "name": "Nvme$subsystem", 00:19:54.805 "trtype": "$TEST_TRANSPORT", 00:19:54.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.805 "adrfam": "ipv4", 00:19:54.805 "trsvcid": "$NVMF_PORT", 00:19:54.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.805 "hdgst": ${hdgst:-false}, 00:19:54.805 "ddgst": ${ddgst:-false} 00:19:54.805 }, 00:19:54.805 "method": "bdev_nvme_attach_controller" 00:19:54.805 } 00:19:54.805 EOF 00:19:54.805 )") 00:19:54.805 13:12:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.805 [the identical "for subsystem in ${@:-1}" / config+=(heredoc) / "# cat" fragment is traced once per subsystem; the verbatim repetitions for the remaining subsystems are omitted here] 00:19:54.806 13:12:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.806 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:19:54.806 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:54.806 13:12:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme1", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme2", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme3", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme4", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme5", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme6", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme7", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme8", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 
00:19:54.806 "trsvcid": "4420", 00:19:54.806 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:54.806 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.806 "hdgst": false, 00:19:54.806 "ddgst": false 00:19:54.806 }, 00:19:54.806 "method": "bdev_nvme_attach_controller" 00:19:54.806 },{ 00:19:54.806 "params": { 00:19:54.806 "name": "Nvme9", 00:19:54.806 "trtype": "tcp", 00:19:54.806 "traddr": "10.0.0.2", 00:19:54.806 "adrfam": "ipv4", 00:19:54.806 "trsvcid": "4420", 00:19:54.807 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.807 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.807 "hdgst": false, 00:19:54.807 "ddgst": false 00:19:54.807 }, 00:19:54.807 "method": "bdev_nvme_attach_controller" 00:19:54.807 },{ 00:19:54.807 "params": { 00:19:54.807 "name": "Nvme10", 00:19:54.807 "trtype": "tcp", 00:19:54.807 "traddr": "10.0.0.2", 00:19:54.807 "adrfam": "ipv4", 00:19:54.807 "trsvcid": "4420", 00:19:54.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.807 "hdgst": false, 00:19:54.807 "ddgst": false 00:19:54.807 }, 00:19:54.807 "method": "bdev_nvme_attach_controller" 00:19:54.807 }' 00:19:54.807 [2024-07-15 13:12:16.337498] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:19:54.807 [2024-07-15 13:12:16.337577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876022 ] 00:19:54.807 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.807 [2024-07-15 13:12:16.400431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.064 [2024-07-15 13:12:16.510811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.970 Running I/O for 10 seconds... 
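Note: the long block above is gen_nvmf_target_json at work (nvmf/common.sh@532-558): one heredoc fragment is expanded per subsystem with $subsystem substituted, the fragments are comma-joined, and the result is handed to bdevperf as its configuration via --json /dev/fd/63, so bdevperf attaches controllers Nvme1..Nvme10 to cnode1..cnode10 before the verify workload starts. A stripped-down sketch of the same pattern (plain bash; gen_entry is an illustrative helper name, not part of nvmf/common.sh):

    # Build one bdev_nvme_attach_controller entry per subsystem.
    gen_entry() {
        local i=$1
        printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i"
    }
    config=()
    for subsystem in {1..10}; do config+=("$(gen_entry "$subsystem")"); done
    IFS=,                              # "${config[*]}" joins on the first char of IFS
    printf '%s\n' "${config[*]}"       # essentially the comma-joined string seen after "# printf" above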
00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.970 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.229 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3875831 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3875831 ']' 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3875831 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3875831 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3875831' 00:19:57.504 killing process with pid 3875831 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3875831 00:19:57.504 13:12:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3875831 00:19:57.504 [2024-07-15 13:12:18.986121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986583] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.504 [2024-07-15 13:12:18.986737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 
00:19:57.505 [2024-07-15 13:12:18.986859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.986989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.987065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc841a0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set 00:19:57.505 [2024-07-15 13:12:18.989468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is 
same with the state(5) to be set
00:19:57.505 [2024-07-15 13:12:18.989480 - 13:12:18.990214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86ba0 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.505 [2024-07-15 13:12:18.991971] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:57.505 [2024-07-15 13:12:18.992016 - 13:12:18.992816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc84640 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.506 [2024-07-15 13:12:18.993769 - 13:12:18.993902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.506 [2024-07-15 13:12:18.993916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148d600 is same with the state(5) to be set
00:19:57.506 [2024-07-15 13:12:18.994004 - 13:12:18.994111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.506 [2024-07-15 13:12:18.994123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f1830 is same with the state(5) to be set
00:19:57.506 [2024-07-15 13:12:18.995847 - 13:12:18.997177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc84ae0 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.507 [2024-07-15 13:12:18.997539] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:57.507 [2024-07-15 13:12:18.999743 - 13:12:19.000580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc84fa0 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.508 [2024-07-15 13:12:19.001339 - 13:12:19.001398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85440 is same with the state(5) to be set (4 identical entries)
00:19:57.508 [2024-07-15 13:12:19.001998 - 13:12:19.002144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85900 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.508 [2024-07-15 13:12:19.002929 - 13:12:19.003032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85da0 is same with the state(5) to be set (7 identical entries)
00:19:57.509 [2024-07-15 13:12:19.003686 - 13:12:19.004507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86240 is same with the state(5) to be set (message repeated throughout this interval, with only the timestamp changing)
00:19:57.510 [2024-07-15 13:12:19.014530 - 13:12:19.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21-63 nsid:1 lba:19072-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.511 [2024-07-15 13:12:19.015987 - 13:12:19.016126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-4 nsid:1 lba:16384-16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.511 [2024-07-15 13:12:19.016142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 
[2024-07-15 13:12:19.016467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.511 [2024-07-15 13:12:19.016574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.511 [2024-07-15 13:12:19.016588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.016603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.512 [2024-07-15 13:12:19.016617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.016675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:57.512 [2024-07-15 13:12:19.016759] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164eb10 was disconnected and freed. reset controller. 
00:19:57.512 [2024-07-15 13:12:19.016921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.016945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.016961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.016975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.016990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bb350 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.017070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d600 (9): Bad file descriptor 00:19:57.512 [2024-07-15 13:12:19.017124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5990 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.017308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef3610 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.017472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1414280 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.017622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f1830 (9): Bad file descriptor 00:19:57.512 [2024-07-15 13:12:19.017676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 
13:12:19.017726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1413c60 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.017840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.017966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.017979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141d450 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.018024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bd240 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.018210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.512 [2024-07-15 13:12:19.018315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.018328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5bb0 is same with the state(5) to be set 00:19:57.512 [2024-07-15 13:12:19.019311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.512 [2024-07-15 13:12:19.019338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.019360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.512 [2024-07-15 13:12:19.019376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.019393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.512 [2024-07-15 13:12:19.019407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.512 [2024-07-15 13:12:19.019424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.512 [2024-07-15 13:12:19.019437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:57.512 [2024-07-15 13:12:19.019454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 
[2024-07-15 13:12:19.019767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.019977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.019993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.513 [2024-07-15 13:12:19.020577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.513 [2024-07-15 13:12:19.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.020970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.020987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.021315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.021415] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164c390 was disconnected and freed. reset controller. 00:19:57.514 [2024-07-15 13:12:19.022963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.022989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.514 [2024-07-15 13:12:19.023570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.514 [2024-07-15 13:12:19.023586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023601] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.023973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.023988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.515 [2024-07-15 13:12:19.024841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:57.515 [2024-07-15 13:12:19.024890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.515 [2024-07-15 13:12:19.024906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.024922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.024937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.024953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.024967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.024982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.024996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025079] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16499f0 was disconnected and freed. reset controller. 00:19:57.516 [2024-07-15 13:12:19.025125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 
13:12:19.025318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.025977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.025992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.516 [2024-07-15 13:12:19.026226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.516 [2024-07-15 13:12:19.026240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.026978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.026992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.027022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.027053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.027082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.027112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.027142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.027237] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x164aec0 was disconnected and freed. reset controller. 
00:19:57.517 [2024-07-15 13:12:19.028575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:57.517 [2024-07-15 13:12:19.028630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:57.517 [2024-07-15 13:12:19.028661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5bb0 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bb350 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5990 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef3610 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1414280 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1413c60 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141d450 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.028930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bd240 (9): Bad file descriptor 00:19:57.517 [2024-07-15 13:12:19.029017] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:57.517 [2024-07-15 13:12:19.031815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:57.517 [2024-07-15 13:12:19.031850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:57.517 [2024-07-15 13:12:19.031960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.031986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.032010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.032026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.032044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.032058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.032075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.032089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.032105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.032120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.517 [2024-07-15 13:12:19.032136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.517 [2024-07-15 13:12:19.032151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 
[2024-07-15 13:12:19.032432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 
13:12:19.032739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.032976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.032993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033073] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.518 [2024-07-15 13:12:19.033251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.518 [2024-07-15 13:12:19.033265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.519 [2024-07-15 13:12:19.033281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.519 [2024-07-15 13:12:19.033295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.519 [2024-07-15 13:12:19.033312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.519 [2024-07-15 13:12:19.033326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.519 [2024-07-15 13:12:19.033342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.519 [2024-07-15 13:12:19.033357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.519 [2024-07-15 13:12:19.033373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.519 [2024-07-15 13:12:19.033387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.519 [2024-07-15 13:12:19.033402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.519 [2024-07-15 13:12:19.033416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 17 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:46-62, lba:22272-24320 ...]
00:19:57.519 [2024-07-15 13:12:19.033977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.519 [2024-07-15 13:12:19.033991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.519 [2024-07-15 13:12:19.034006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485d70 is same with the state(5) to be set
00:19:57.519 [2024-07-15 13:12:19.035907] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
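For reference, "Unexpected PDU type 0x00" comes from the host-side NVMe/TCP receive path: in the NVMe/TCP common header, PDU type 0x00 is ICReq, an initiator-to-target PDU a host should never receive, so reading it usually means the stream returned zeroed bytes while the target side of the connection was going away. A minimal sketch of that classification, assuming only the PDU type values from the NVMe/TCP transport spec (the enum and function names here are illustrative, not SPDK's):

#include <stdint.h>

/* NVMe/TCP common-header PDU types (transport spec values). */
enum pdu_type {
	PDU_ICREQ        = 0x00, /* host -> target only */
	PDU_ICRESP       = 0x01,
	PDU_C2H_TERM_REQ = 0x03,
	PDU_CAPSULE_RESP = 0x05,
	PDU_C2H_DATA     = 0x07,
	PDU_R2T          = 0x09,
};

/* Return 1 if a host-side receiver can legally see this PDU type. */
static int host_may_receive(uint8_t type)
{
	switch (type) {
	case PDU_ICRESP:
	case PDU_C2H_TERM_REQ:
	case PDU_CAPSULE_RESP:
	case PDU_C2H_DATA:
	case PDU_R2T:
		return 1;
	default:
		return 0; /* 0x00 (ICReq) lands here -> "Unexpected PDU type" */
	}
}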
00:19:57.519 [2024-07-15 13:12:19.035990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.519 [2024-07-15 13:12:19.036012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:1-62, lba:16512-24320 ...]
00:19:57.521 [2024-07-15 13:12:19.038064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.521 [2024-07-15 13:12:19.038078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.521 [2024-07-15 13:12:19.038092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164ffc0 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.039791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:57.521 [2024-07-15 13:12:19.039826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:57.521 [2024-07-15 13:12:19.040073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.040107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bb350 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.040125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bb350 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.040279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.040306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5bb0 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.040321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5bb0 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.040564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.040590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1414280 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.040606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1414280 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.040733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.040760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef3610 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.040776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef3610 is same with the state(5) to be set
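The connect() failures above are the reconnect side of the controller resets: errno 111 is ECONNREFUSED on Linux, meaning nothing is listening at 10.0.0.2:4420 while the target tears down and re-creates its listener, so the host's reconnect attempts are refused until the port comes back. A minimal sketch of how such a refused connect surfaces to a caller (plain POSIX sockets, not SPDK's sock layer; the function name is illustrative):

#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try one TCP connect; on failure return -errno so the caller can
 * distinguish ECONNREFUSED (111, retry later) from harder errors. */
static int try_connect(const char *ip, uint16_t port)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
	int fd;

	if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1)
		return -EINVAL;
	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -errno;
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		int err = errno; /* 111 == ECONNREFUSED in the log above */
		fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
		close(fd);
		return -err;
	}
	return fd; /* connected; caller owns the fd */
}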
00:19:57.521 [2024-07-15 13:12:19.040963] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:57.521 [2024-07-15 13:12:19.041178] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:57.521 [2024-07-15 13:12:19.041984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.042015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f1830 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.042031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f1830 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.042151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.521 [2024-07-15 13:12:19.042189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148d600 with addr=10.0.0.2, port=4420
00:19:57.521 [2024-07-15 13:12:19.042205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148d600 is same with the state(5) to be set
00:19:57.521 [2024-07-15 13:12:19.042228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bb350 (9): Bad file descriptor
00:19:57.521 [2024-07-15 13:12:19.042248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5bb0 (9): Bad file descriptor
00:19:57.521 [2024-07-15 13:12:19.042277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1414280 (9): Bad file descriptor
00:19:57.521 [2024-07-15 13:12:19.042294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef3610 (9): Bad file descriptor
00:19:57.521 [2024-07-15 13:12:19.042637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.521 [2024-07-15 13:12:19.042661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 54 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:5-58, lba:17024-23808 ...]
00:19:57.522 [2024-07-15 13:12:19.044449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.522 [2024-07-15 13:12:19.044463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 more identical WRITE / ABORTED - SQ DELETION (00/08) pairs for cid:1-3, lba:24704-24960 ...]
00:19:57.522 [2024-07-15 13:12:19.044575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.522 [2024-07-15 13:12:19.044590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 4 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:60-63, lba:24064-24448 ...]
00:19:57.523 [2024-07-15 13:12:19.044729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15402f0 is same with the state(5) to be set
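Each "(00/08)" completion above decodes as status code type 0x0 (generic command status) / status code 0x08 (Command Aborted due to SQ Deletion): the I/O never executed on media, it was failed back because its submission queue was deleted during the reset. Assuming the public SPDK NVMe headers, an I/O completion callback could single these out roughly as follows (the callback itself is a hypothetical example, not code from this test):

#include "spdk/nvme.h"

/* Hypothetical I/O completion callback: detect commands that were
 * failed back with ABORTED - SQ DELETION during a qpair teardown. */
static void io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Queue pair was deleted mid-flight (e.g. controller reset);
		 * the command is safe to resubmit once the qpair reconnects. */
	}
}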
00:19:57.523 [2024-07-15 13:12:19.046035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.523 [2024-07-15 13:12:19.046059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 49 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:1-49, lba:16512-22656 ...]
00:19:57.524 [2024-07-15 13:12:19.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.524 [2024-07-15 13:12:19.047686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.524 [2024-07-15 13:12:19.047702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.047978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.047994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.048009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 
13:12:19.048024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.048039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.048055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.048085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.048099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.048113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1541780 is same with the state(5) to be set 00:19:57.524 [2024-07-15 13:12:19.049364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.524 [2024-07-15 13:12:19.049794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.524 [2024-07-15 13:12:19.049811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.049826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.049842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.049856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.049889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.049909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.049926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.049943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.049960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.049974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.049991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.050971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.050986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.525 [2024-07-15 13:12:19.051182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.525 [2024-07-15 13:12:19.051199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.051434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.051448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ecb50 is same with the state(5) to be set 00:19:57.526 [2024-07-15 13:12:19.052729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.052971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.052988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.526 [2024-07-15 13:12:19.053632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.526 [2024-07-15 13:12:19.053658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.053978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.053995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:57.527 [2024-07-15 13:12:19.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 
13:12:19.054426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.527 [2024-07-15 13:12:19.054714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.527 [2024-07-15 13:12:19.054728] 
00:19:57.527 [2024-07-15 13:12:19.054744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:57.527 [2024-07-15 13:12:19.054760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.527 [2024-07-15 13:12:19.054776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d860 is same with the state(5) to be set
00:19:57.527 [2024-07-15 13:12:19.057297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:57.527 [2024-07-15 13:12:19.057334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:57.527 [2024-07-15 13:12:19.057354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:57.527 task offset: 19072 on job bdev=Nvme9n1 fails
00:19:57.527
00:19:57.527 Latency(us)
00:19:57.527 Device Information   : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average         min         max
00:19:57.527 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.527 Job: Nvme1n1 ended in about 0.88 seconds with error
00:19:57.527 Verification LBA range: start 0x0 length 0x400
00:19:57.527 Nvme1n1              :       0.88   144.72     9.05    72.36   0.00   291441.02    22039.51   253211.69
00:19:57.527 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.527 Job: Nvme2n1 ended in about 0.90 seconds with error
00:19:57.527 Verification LBA range: start 0x0 length 0x400
00:19:57.527 Nvme2n1              :       0.90   147.46     9.22    71.49   0.00   283118.00    19709.35   254765.13
00:19:57.527 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.527 Job: Nvme3n1 ended in about 0.90 seconds with error
00:19:57.527 Verification LBA range: start 0x0 length 0x400
00:19:57.527 Nvme3n1              :       0.90   142.46     8.90    71.23   0.00   283975.43    24660.95   281173.71
00:19:57.527 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.527 Job: Nvme4n1 ended in about 0.90 seconds with error
00:19:57.527 Verification LBA range: start 0x0 length 0x400
00:19:57.527 Nvme4n1              :       0.90   217.34    13.58    70.97   0.00   205968.07    15243.19   250104.79
00:19:57.528 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme5n1 ended in about 0.88 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme5n1              :       0.88   145.52     9.09    72.76   0.00   265700.95    10437.21   278066.82
00:19:57.528 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme6n1 ended in about 0.88 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme6n1              :       0.88   218.01    13.63    72.67   0.00   194970.17     8932.31   251658.24
00:19:57.528 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme7n1 ended in about 0.88 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme7n1              :       0.88   218.75    13.67    72.92   0.00   189675.33    13301.38   251658.24
00:19:57.528 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme8n1 ended in about 0.91 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme8n1              :       0.91   141.41     8.84    70.70   0.00   256196.46    19709.35   245444.46
00:19:57.528 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme9n1 ended in about 0.87 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme9n1              :       0.87   146.80     9.18    73.40   0.00   239356.21     7330.32   299815.06
00:19:57.528 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:57.528 Job: Nvme10n1 ended in about 0.89 seconds with error
00:19:57.528 Verification LBA range: start 0x0 length 0x400
00:19:57.528 Nvme10n1             :       0.89   144.06     9.00    72.03   0.00   239085.04    16990.81   251658.24
00:19:57.528 ===================================================================================================================
00:19:57.528 Total                :             1666.52   104.16   720.53   0.00   240592.95     7330.32   299815.06
00:19:57.528 [2024-07-15 13:12:19.084915] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:57.528 [2024-07-15 13:12:19.084989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:57.528 [2024-07-15 13:12:19.085088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f1830 (9): Bad file descriptor
00:19:57.528 [2024-07-15 13:12:19.085117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d600 (9): Bad file descriptor
00:19:57.528 [2024-07-15 13:12:19.085135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:57.528 [2024-07-15 13:12:19.085149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:57.528 [2024-07-15 13:12:19.085176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:57.528 [2024-07-15 13:12:19.085201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:57.528 [2024-07-15 13:12:19.085217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:57.528 [2024-07-15 13:12:19.085230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:57.528 [2024-07-15 13:12:19.085249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:57.528 [2024-07-15 13:12:19.085264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:57.528 [2024-07-15 13:12:19.085277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:57.528 [2024-07-15 13:12:19.085296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:57.528 [2024-07-15 13:12:19.085310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:57.528 [2024-07-15 13:12:19.085324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:57.528 [2024-07-15 13:12:19.085348] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:57.528 [2024-07-15 13:12:19.085372] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:57.528 [2024-07-15 13:12:19.085392] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.085422] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.085466] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.085490] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.085652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.085675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.085687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.085699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.085992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.086029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bd240 with addr=10.0.0.2, port=4420 00:19:57.528 [2024-07-15 13:12:19.086049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bd240 is same with the state(5) to be set 00:19:57.528 [2024-07-15 13:12:19.086193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.086220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141d450 with addr=10.0.0.2, port=4420 00:19:57.528 [2024-07-15 13:12:19.086260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141d450 is same with the state(5) to be set 00:19:57.528 [2024-07-15 13:12:19.086383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.086412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413c60 with addr=10.0.0.2, port=4420 00:19:57.528 [2024-07-15 13:12:19.086428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1413c60 is same with the state(5) to be set 00:19:57.528 [2024-07-15 13:12:19.086554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.086580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5990 with addr=10.0.0.2, port=4420 00:19:57.528 [2024-07-15 13:12:19.086596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5990 is same with the state(5) to be set 00:19:57.528 [2024-07-15 13:12:19.086611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.086624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.086641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:57.528 [2024-07-15 13:12:19.086660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.086675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.086688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:57.528 [2024-07-15 13:12:19.086743] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.086768] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.528 [2024-07-15 13:12:19.087987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bd240 (9): Bad file descriptor 00:19:57.528 [2024-07-15 13:12:19.088069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141d450 (9): Bad file descriptor 00:19:57.528 [2024-07-15 13:12:19.088087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1413c60 (9): Bad file descriptor 00:19:57.528 [2024-07-15 13:12:19.088104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5990 (9): Bad file descriptor 00:19:57.528 [2024-07-15 13:12:19.088177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:57.528 [2024-07-15 13:12:19.088214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:57.528 [2024-07-15 13:12:19.088232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:57.528 [2024-07-15 13:12:19.088248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:57.528 [2024-07-15 13:12:19.088290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.088306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.088320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:57.528 [2024-07-15 13:12:19.088337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.088351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.088370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:19:57.528 [2024-07-15 13:12:19.088388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.088402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.088415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:57.528 [2024-07-15 13:12:19.088430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:57.528 [2024-07-15 13:12:19.088444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:57.528 [2024-07-15 13:12:19.088461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:57.528 [2024-07-15 13:12:19.088527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.528 [2024-07-15 13:12:19.088807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.088833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef3610 with addr=10.0.0.2, port=4420 00:19:57.528 [2024-07-15 13:12:19.088849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef3610 is same with the state(5) to be set 00:19:57.528 [2024-07-15 13:12:19.089122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.528 [2024-07-15 13:12:19.089151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1414280 with addr=10.0.0.2, port=4420 00:19:57.529 [2024-07-15 13:12:19.089168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1414280 is same with the state(5) to be set 00:19:57.529 [2024-07-15 13:12:19.089307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.529 [2024-07-15 13:12:19.089333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b5bb0 with addr=10.0.0.2, port=4420 00:19:57.529 [2024-07-15 13:12:19.089349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b5bb0 is same with the state(5) to be set 00:19:57.529 [2024-07-15 13:12:19.089470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.529 [2024-07-15 13:12:19.089496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bb350 with addr=10.0.0.2, port=4420 00:19:57.529 [2024-07-15 13:12:19.089511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bb350 is same with the state(5) to be set 00:19:57.529 [2024-07-15 13:12:19.089555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef3610 (9): Bad file descriptor 00:19:57.529 [2024-07-15 13:12:19.089580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1414280 (9): Bad file descriptor 00:19:57.529 
[2024-07-15 13:12:19.089599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b5bb0 (9): Bad file descriptor 00:19:57.529 [2024-07-15 13:12:19.089616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bb350 (9): Bad file descriptor 00:19:57.529 [2024-07-15 13:12:19.089666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:57.529 [2024-07-15 13:12:19.089684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:57.529 [2024-07-15 13:12:19.089698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:57.529 [2024-07-15 13:12:19.089721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:57.529 [2024-07-15 13:12:19.089736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:57.529 [2024-07-15 13:12:19.089750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:57.529 [2024-07-15 13:12:19.089765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:57.529 [2024-07-15 13:12:19.089779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:57.529 [2024-07-15 13:12:19.089792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:57.529 [2024-07-15 13:12:19.089807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:57.529 [2024-07-15 13:12:19.089821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:57.529 [2024-07-15 13:12:19.089834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:57.529 [2024-07-15 13:12:19.089891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.529 [2024-07-15 13:12:19.089911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.529 [2024-07-15 13:12:19.089924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:57.529 [2024-07-15 13:12:19.089935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
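The storm above is the expected signature of shutdown_tc3: the target drops its subsystems while bdevperf still has I/O queued, so pending READs complete as ABORTED - SQ DELETION and every reconnect toward 10.0.0.2:4420 is refused (errno = 111, ECONNREFUSED) while bdevperf shuts down. A quick way to summarize such a storm from a saved copy of the console output (the file name build.log below is only an example, not produced by this job) is: 

    # tally which controllers failed and how many reconnects were refused 
    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' build.log | sort | uniq -c | sort -rn 
    grep -c 'connect() failed, errno = 111' build.log 
    grep -c 'Resetting controller failed\.' build.log 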
00:19:58.099 13:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:58.099 13:12:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3876022 00:19:59.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3876022) - No such process 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.037 rmmod nvme_tcp 00:19:59.037 rmmod nvme_fabrics 00:19:59.037 rmmod nvme_keyring 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.037 13:12:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.573 00:20:01.573 real 0m8.114s 00:20:01.573 user 0m20.755s 00:20:01.573 sys 0m1.502s 00:20:01.573 
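The block above is nvmftestfini in full: kill the (already exited) target pid, unload the kernel initiator modules, tear down the target namespace and flush the leftover address. A condensed hand-run sketch of the same steps, reusing the interface and namespace names from this run (the body of _remove_spdk_ns is not shown in this log, so the netns deletion step is an assumption about what that helper does): 

    modprobe -v -r nvme-tcp                   # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above 
    ip netns delete cvl_0_0_ns_spdk || true   # presumed equivalent of _remove_spdk_ns 
    ip -4 addr flush cvl_0_1                  # same flush nvmf/common.sh@279 performs 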
13:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.573 ************************************ 00:20:01.573 END TEST nvmf_shutdown_tc3 00:20:01.573 ************************************ 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:01.573 00:20:01.573 real 0m28.182s 00:20:01.573 user 1m19.710s 00:20:01.573 sys 0m6.288s 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.573 13:12:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:01.573 ************************************ 00:20:01.573 END TEST nvmf_shutdown 00:20:01.573 ************************************ 00:20:01.573 13:12:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:01.573 13:12:22 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:01.573 13:12:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.573 13:12:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.573 13:12:22 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:01.573 13:12:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.574 13:12:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.574 13:12:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:01.574 13:12:22 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:01.574 13:12:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.574 13:12:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.574 13:12:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.574 ************************************ 00:20:01.574 START TEST nvmf_multicontroller 00:20:01.574 ************************************ 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:01.574 * Looking for test storage... 
00:20:01.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:01.574 13:12:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.574 13:12:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.513 13:12:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:03.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:03.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:03.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:03.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.513 13:12:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:20:03.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 
00:20:03.513 
00:20:03.513 --- 10.0.0.2 ping statistics --- 
00:20:03.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:20:03.513 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 
00:20:03.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 
00:20:03.513 
00:20:03.513 --- 10.0.0.1 ping statistics --- 
00:20:03.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:20:03.513 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3878537 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3878537 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3878537 ']' 
00:20:03.513 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:20:03.514 13:12:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.514 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.514 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.514 13:12:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.514 [2024-07-15 13:12:24.985999] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:20:03.514 [2024-07-15 13:12:24.986082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.514 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.514 [2024-07-15 13:12:25.054814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.514 [2024-07-15 13:12:25.176985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.514 [2024-07-15 13:12:25.177062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.514 [2024-07-15 13:12:25.177078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.514 [2024-07-15 13:12:25.177091] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.514 [2024-07-15 13:12:25.177103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
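The EAL banner above corresponds to the nvmfappstart -m 0xE call traced earlier: nvmf_tgt runs inside the target namespace with shared-memory id 0 (-i 0), every tracepoint group enabled (-e 0xFFFF) and three cores available. A sketch of the equivalent manual bring-up, plus the trace snapshot the app itself suggests (binary locations assumed to match this workspace layout): 

    # start the target inside the test namespace, exactly as nvmf/common.sh@480 does 
    ip netns exec cvl_0_0_ns_spdk \ 
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE & 
    # once it is up, capture the nvmf tracepoints per the hint printed by app_setup_trace above 
    spdk_trace -s nvmf -i 0 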
00:20:03.514 [2024-07-15 13:12:25.177205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.514 [2024-07-15 13:12:25.178898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.514 [2024-07-15 13:12:25.178910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 [2024-07-15 13:12:25.326834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 Malloc0 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 [2024-07-15 13:12:25.381550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 
13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 [2024-07-15 13:12:25.389428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 Malloc1 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3878570 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3878570 /var/tmp/bdevperf.sock 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3878570 ']' 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.773 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.340 NVMe0n1 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.340 1 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.340 request: 00:20:04.340 { 00:20:04.340 "name": "NVMe0", 00:20:04.340 "trtype": "tcp", 00:20:04.340 "traddr": "10.0.0.2", 00:20:04.340 "adrfam": "ipv4", 00:20:04.340 "trsvcid": "4420", 00:20:04.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.340 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:04.340 "hostaddr": "10.0.0.2", 00:20:04.340 "hostsvcid": "60000", 00:20:04.340 "prchk_reftag": false, 00:20:04.340 "prchk_guard": false, 00:20:04.340 "hdgst": false, 00:20:04.340 "ddgst": false, 00:20:04.340 "method": "bdev_nvme_attach_controller", 00:20:04.340 "req_id": 1 00:20:04.340 } 00:20:04.340 Got JSON-RPC error response 00:20:04.340 response: 00:20:04.340 { 00:20:04.340 "code": -114, 00:20:04.340 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.340 } 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.340 request: 00:20:04.340 { 00:20:04.340 "name": "NVMe0", 00:20:04.340 "trtype": "tcp", 00:20:04.340 "traddr": "10.0.0.2", 00:20:04.340 "adrfam": "ipv4", 00:20:04.340 "trsvcid": "4420", 00:20:04.340 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.340 "hostaddr": "10.0.0.2", 00:20:04.340 "hostsvcid": "60000", 00:20:04.340 "prchk_reftag": false, 00:20:04.340 "prchk_guard": false, 00:20:04.340 
"hdgst": false, 00:20:04.340 "ddgst": false, 00:20:04.340 "method": "bdev_nvme_attach_controller", 00:20:04.340 "req_id": 1 00:20:04.340 } 00:20:04.340 Got JSON-RPC error response 00:20:04.340 response: 00:20:04.340 { 00:20:04.340 "code": -114, 00:20:04.340 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.340 } 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.340 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.341 request: 00:20:04.341 { 00:20:04.341 "name": "NVMe0", 00:20:04.341 "trtype": "tcp", 00:20:04.341 "traddr": "10.0.0.2", 00:20:04.341 "adrfam": "ipv4", 00:20:04.341 "trsvcid": "4420", 00:20:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.341 "hostaddr": "10.0.0.2", 00:20:04.341 "hostsvcid": "60000", 00:20:04.341 "prchk_reftag": false, 00:20:04.341 "prchk_guard": false, 00:20:04.341 "hdgst": false, 00:20:04.341 "ddgst": false, 00:20:04.341 "multipath": "disable", 00:20:04.341 "method": "bdev_nvme_attach_controller", 00:20:04.341 "req_id": 1 00:20:04.341 } 00:20:04.341 Got JSON-RPC error response 00:20:04.341 response: 00:20:04.341 { 00:20:04.341 "code": -114, 00:20:04.341 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:04.341 } 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.341 13:12:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.341 request: 00:20:04.341 { 00:20:04.341 "name": "NVMe0", 00:20:04.341 "trtype": "tcp", 00:20:04.341 "traddr": "10.0.0.2", 00:20:04.341 "adrfam": "ipv4", 00:20:04.341 "trsvcid": "4420", 00:20:04.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.341 "hostaddr": "10.0.0.2", 00:20:04.341 "hostsvcid": "60000", 00:20:04.341 "prchk_reftag": false, 00:20:04.341 "prchk_guard": false, 00:20:04.341 "hdgst": false, 00:20:04.341 "ddgst": false, 00:20:04.341 "multipath": "failover", 00:20:04.341 "method": "bdev_nvme_attach_controller", 00:20:04.341 "req_id": 1 00:20:04.341 } 00:20:04.341 Got JSON-RPC error response 00:20:04.341 response: 00:20:04.341 { 00:20:04.341 "code": -114, 00:20:04.341 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:04.341 } 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.341 13:12:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.598 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.598 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:04.598 13:12:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.973 { 00:20:05.973 "core_count": 1, 00:20:05.973 "test_results": [ 00:20:05.973 { 00:20:05.973 "job": "NVMe0n1", 00:20:05.973 "test_status": "finished", 00:20:05.973 "core_mask": "0x1", 00:20:05.973 "workload": "write", 00:20:05.973 "queue_depth": 128, 00:20:05.973 "io_size": 4096, 00:20:05.973 "runtime": 1.0055010318756104, 00:20:05.973 "io_per_second": 19252.094229642735, 00:20:05.973 "MiB_per_second": 75.20349308454193, 00:20:05.973 "fails_per_second": 0.0, 00:20:05.973 "timeout_per_second": 0.0, 00:20:05.973 "average_latency_us": 6638.401727757306, 00:20:05.973 "min_latency_us": 4102.068148148148, 00:20:05.973 "max_latency_us": 16990.814814814814 00:20:05.973 } 00:20:05.973 ] 00:20:05.973 } 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3878570 ']' 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@953 -- # uname 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878570' 00:20:05.973 killing process with pid 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3878570 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.973 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:06.232 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:06.232 [2024-07-15 13:12:25.493759] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:20:06.232 [2024-07-15 13:12:25.493866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878570 ] 00:20:06.232 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.232 [2024-07-15 13:12:25.553343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.232 [2024-07-15 13:12:25.661109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.232 [2024-07-15 13:12:26.240978] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e33fc80c-6239-4a0d-af2f-8155da6d29e9 already exists 00:20:06.232 [2024-07-15 13:12:26.241025] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e33fc80c-6239-4a0d-af2f-8155da6d29e9 alias for bdev NVMe1n1 00:20:06.232 [2024-07-15 13:12:26.241052] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:06.232 Running I/O for 1 seconds... 00:20:06.232 00:20:06.232 Latency(us) 00:20:06.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.232 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:06.232 NVMe0n1 : 1.01 19252.09 75.20 0.00 0.00 6638.40 4102.07 16990.81 00:20:06.232 =================================================================================================================== 00:20:06.232 Total : 19252.09 75.20 0.00 0.00 6638.40 4102.07 16990.81 00:20:06.232 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.232 00:20:06.232 Latency(us) 00:20:06.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.232 =================================================================================================================== 00:20:06.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.232 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.232 rmmod nvme_tcp 00:20:06.232 rmmod nvme_fabrics 00:20:06.232 rmmod nvme_keyring 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3878537 ']' 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3878537 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@948 -- # '[' -z 3878537 ']' 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3878537 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878537 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878537' 00:20:06.232 killing process with pid 3878537 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3878537 00:20:06.232 13:12:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3878537 00:20:06.490 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.491 13:12:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.400 13:12:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.659 00:20:08.659 real 0m7.344s 00:20:08.659 user 0m11.439s 00:20:08.659 sys 0m2.274s 00:20:08.659 13:12:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.659 13:12:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.659 ************************************ 00:20:08.659 END TEST nvmf_multicontroller 00:20:08.659 ************************************ 00:20:08.659 13:12:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:08.659 13:12:30 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.659 13:12:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:08.659 13:12:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.659 13:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.659 ************************************ 00:20:08.659 START TEST nvmf_aer 00:20:08.659 ************************************ 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:08.659 * Looking for test storage... 
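The multicontroller checks above reduce to a handful of bdev_nvme_attach_controller calls against the bdevperf RPC socket. A minimal sketch of the same sequence (not part of the captured run; assumes an SPDK checkout and the same socket, addresses and NQNs as this run):

# first attach on port 4420 creates controller NVMe0 and bdev NVMe0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# re-using the name NVMe0 on the same network path -- whether with a different
# hostnqn (-q), a different subsystem NQN, -x disable, or -x failover -- is
# rejected with JSON-RPC error -114, as the four NOT cases above show
# a second listener on port 4421 is a genuinely new path, so this succeeds:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1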
00:20:08.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.659 13:12:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.660 13:12:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:10.563 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:10.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:10.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:10.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.563 
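The nvmf_tcp_init steps in the lines that follow wire the two ice ports back-to-back: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed, the topology they build is equivalent to this sketch:

# root namespace                       cvl_0_0_ns_spdk namespace
#   cvl_0_1 10.0.0.1/24  <--cable-->   cvl_0_0 10.0.0.2/24 (target listener)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port through the root-namespace firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT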
13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.563 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.564 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:20:10.845 00:20:10.845 --- 10.0.0.2 ping statistics --- 00:20:10.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.845 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:20:10.845 00:20:10.845 --- 10.0.0.1 ping statistics --- 00:20:10.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.845 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3880773 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3880773 00:20:10.845 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3880773 ']' 00:20:10.846 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.846 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.846 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.846 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.846 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.846 [2024-07-15 13:12:32.380753] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:20:10.846 [2024-07-15 13:12:32.380833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.846 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.846 [2024-07-15 13:12:32.445392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.104 [2024-07-15 13:12:32.558359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.104 [2024-07-15 13:12:32.558428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.104 [2024-07-15 13:12:32.558441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.104 [2024-07-15 13:12:32.558452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.104 [2024-07-15 13:12:32.558461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.104 [2024-07-15 13:12:32.558540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.104 [2024-07-15 13:12:32.558605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.104 [2024-07-15 13:12:32.558669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.104 [2024-07-15 13:12:32.558672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 [2024-07-15 13:12:32.710544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 Malloc0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 [2024-07-15 13:12:32.761645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 [ 00:20:11.104 { 00:20:11.104 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.104 "subtype": "Discovery", 00:20:11.104 "listen_addresses": [], 00:20:11.104 "allow_any_host": true, 00:20:11.104 "hosts": [] 00:20:11.104 }, 00:20:11.104 { 00:20:11.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.104 "subtype": "NVMe", 00:20:11.104 "listen_addresses": [ 00:20:11.104 { 00:20:11.104 "trtype": "TCP", 00:20:11.104 "adrfam": "IPv4", 00:20:11.104 "traddr": "10.0.0.2", 00:20:11.104 "trsvcid": "4420" 00:20:11.104 } 00:20:11.104 ], 00:20:11.104 "allow_any_host": true, 00:20:11.104 "hosts": [], 00:20:11.104 "serial_number": "SPDK00000000000001", 00:20:11.104 "model_number": "SPDK bdev Controller", 00:20:11.104 "max_namespaces": 2, 00:20:11.104 "min_cntlid": 1, 00:20:11.104 "max_cntlid": 65519, 00:20:11.104 "namespaces": [ 00:20:11.104 { 00:20:11.104 "nsid": 1, 00:20:11.104 "bdev_name": "Malloc0", 00:20:11.104 "name": "Malloc0", 00:20:11.104 "nguid": "5F91748C04C24977BF2DBF645761A09E", 00:20:11.104 "uuid": "5f91748c-04c2-4977-bf2d-bf645761a09e" 00:20:11.104 } 00:20:11.104 ] 00:20:11.104 } 00:20:11.104 ] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3880873 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:11.104 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:11.361 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:11.361 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:11.362 13:12:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:11.362 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.362 13:12:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.362 Malloc1 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.362 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.362 [ 00:20:11.362 { 00:20:11.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.362 "subtype": "Discovery", 00:20:11.362 "listen_addresses": [], 00:20:11.362 "allow_any_host": true, 00:20:11.362 "hosts": [] 00:20:11.362 }, 00:20:11.362 { 00:20:11.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.362 "subtype": "NVMe", 00:20:11.362 "listen_addresses": [ 00:20:11.362 { 00:20:11.362 "trtype": "TCP", 00:20:11.362 "adrfam": "IPv4", 00:20:11.362 "traddr": "10.0.0.2", 00:20:11.362 "trsvcid": "4420" 00:20:11.362 } 00:20:11.362 ], 00:20:11.362 "allow_any_host": true, 00:20:11.362 "hosts": [], 00:20:11.362 "serial_number": "SPDK00000000000001", 00:20:11.362 "model_number": "SPDK bdev Controller", 00:20:11.362 "max_namespaces": 2, 00:20:11.362 "min_cntlid": 1, 00:20:11.362 "max_cntlid": 65519, 00:20:11.362 "namespaces": [ 00:20:11.362 { 00:20:11.362 "nsid": 1, 00:20:11.362 "bdev_name": "Malloc0", 00:20:11.362 "name": "Malloc0", 00:20:11.362 "nguid": "5F91748C04C24977BF2DBF645761A09E", 00:20:11.362 "uuid": "5f91748c-04c2-4977-bf2d-bf645761a09e" 00:20:11.362 }, 00:20:11.362 { 00:20:11.362 "nsid": 2, 00:20:11.362 "bdev_name": "Malloc1", 00:20:11.362 "name": "Malloc1", 00:20:11.362 "nguid": "E6191426965746F29F31635E2C8C0E4F", 00:20:11.362 "uuid": "e6191426-9657-46f2-9f31-635e2c8c0e4f" 00:20:11.619 } 00:20:11.619 ] 00:20:11.619 } 00:20:11.619 ] 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3880873 00:20:11.619 Asynchronous Event Request test 00:20:11.619 Attaching to 10.0.0.2 00:20:11.619 Attached to 10.0.0.2 00:20:11.619 Registering asynchronous event callbacks... 00:20:11.619 Starting namespace attribute notice tests for all controllers... 00:20:11.619 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:11.619 aer_cb - Changed Namespace 00:20:11.619 Cleaning up... 
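The AER exchange above can be reproduced by hand against a running target; a sketch, assuming the same default RPC socket and subsystem as this run, with the aer example binary (built under test/nvme/aer in the SPDK tree) standing in for a host:

# start a host that arms AERs and waits for a namespace-attribute notice
test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# adding a second namespace triggers the Changed Namespace List AEN (log page 0x04)
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2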
00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.619 rmmod nvme_tcp 00:20:11.619 rmmod nvme_fabrics 00:20:11.619 rmmod nvme_keyring 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3880773 ']' 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3880773 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3880773 ']' 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3880773 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880773 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880773' 00:20:11.619 killing process with pid 3880773 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3880773 00:20:11.619 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3880773 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.877 13:12:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.805 13:12:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.805 00:20:13.805 real 0m5.349s 00:20:13.805 user 0m4.100s 00:20:13.805 sys 0m1.871s 00:20:14.063 13:12:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.063 13:12:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.063 ************************************ 00:20:14.063 END TEST nvmf_aer 00:20:14.063 ************************************ 00:20:14.063 13:12:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:14.063 13:12:35 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.063 13:12:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.063 13:12:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.063 13:12:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.063 ************************************ 00:20:14.063 START TEST nvmf_async_init 00:20:14.063 ************************************ 00:20:14.063 13:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:14.063 * Looking for test storage... 
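The nvmftestfini teardown that closed the aer test above mirrors the init: unload the fabrics modules, stop the target, and dismantle the namespace. A rough sketch of the equivalent manual cleanup, assuming the same names as this run:

sync
modprobe -r nvme-tcp nvme-fabrics nvme-keyring   # nvmfcleanup
kill "$nvmfpid"                                  # killprocess: SIGTERM, then wait
ip netns delete cvl_0_0_ns_spdk                  # roughly what _remove_spdk_ns does
ip -4 addr flush cvl_0_1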
00:20:14.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4b6d7d16d42748dca043258685d45f27 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.064 13:12:35 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.064 13:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:15.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:15.962 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:15.962 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
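For reference, the "Found net devices under ..." entries above come from nvmf/common.sh resolving each supported PCI function to its kernel network interfaces through sysfs. A minimal sketch of that lookup, assuming the PCI address shown in the log (an illustration, not the test's verbatim code):

    pci=0000:0a:00.0                                   # example address taken from the log above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs dirs are named after the netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints: cvl_0_0

Each of the two E810 ports (0x8086:0x159b) resolves this way, yielding cvl_0_0 here and cvl_0_1 in the next loop iteration.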
00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:15.962 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.962 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:15.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:15.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:20:15.963
00:20:15.963 --- 10.0.0.2 ping statistics ---
00:20:15.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:15.963 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:15.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:15.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms
00:20:15.963
00:20:15.963 --- 10.0.0.1 ping statistics ---
00:20:15.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:15.963 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3882855
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3882855
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3882855 ']'
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:15.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:15.963 13:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:20:16.221 [2024-07-15 13:12:37.694955] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
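Condensed for readability, the nvmf_tcp_init sequence above builds a two-interface test bed and then starts the target inside the namespace. A simplified sketch, with all interface names and addresses taken from the log (the nvmf_tgt path is abbreviated, and the script's initial address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator NIC stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root ns -> namespace, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

The single-core mask (-m 0x1) matches the "Total cores available: 1" notice the application prints just below.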
00:20:16.221 [2024-07-15 13:12:37.695047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.221 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.221 [2024-07-15 13:12:37.763420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.221 [2024-07-15 13:12:37.879468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.221 [2024-07-15 13:12:37.879523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.221 [2024-07-15 13:12:37.879549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.221 [2024-07-15 13:12:37.879561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.221 [2024-07-15 13:12:37.879573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.221 [2024-07-15 13:12:37.879607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 [2024-07-15 13:12:38.665295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 null0 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 13:12:38 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4b6d7d16d42748dca043258685d45f27 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 [2024-07-15 13:12:38.705522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.153 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.411 nvme0n1 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.411 [ 00:20:17.411 { 00:20:17.411 "name": "nvme0n1", 00:20:17.411 "aliases": [ 00:20:17.411 "4b6d7d16-d427-48dc-a043-258685d45f27" 00:20:17.411 ], 00:20:17.411 "product_name": "NVMe disk", 00:20:17.411 "block_size": 512, 00:20:17.411 "num_blocks": 2097152, 00:20:17.411 "uuid": "4b6d7d16-d427-48dc-a043-258685d45f27", 00:20:17.411 "assigned_rate_limits": { 00:20:17.411 "rw_ios_per_sec": 0, 00:20:17.411 "rw_mbytes_per_sec": 0, 00:20:17.411 "r_mbytes_per_sec": 0, 00:20:17.411 "w_mbytes_per_sec": 0 00:20:17.411 }, 00:20:17.411 "claimed": false, 00:20:17.411 "zoned": false, 00:20:17.411 "supported_io_types": { 00:20:17.411 "read": true, 00:20:17.411 "write": true, 00:20:17.411 "unmap": false, 00:20:17.411 "flush": true, 00:20:17.411 "reset": true, 00:20:17.411 "nvme_admin": true, 00:20:17.411 "nvme_io": true, 00:20:17.411 "nvme_io_md": false, 00:20:17.411 "write_zeroes": true, 00:20:17.411 "zcopy": false, 00:20:17.411 "get_zone_info": false, 00:20:17.411 "zone_management": false, 00:20:17.411 "zone_append": false, 00:20:17.411 "compare": true, 00:20:17.411 "compare_and_write": true, 00:20:17.411 "abort": true, 00:20:17.411 "seek_hole": false, 00:20:17.411 "seek_data": false, 00:20:17.411 "copy": true, 00:20:17.411 "nvme_iov_md": false 00:20:17.411 }, 00:20:17.411 "memory_domains": [ 00:20:17.411 { 00:20:17.411 "dma_device_id": "system", 00:20:17.411 "dma_device_type": 1 00:20:17.411 } 00:20:17.411 ], 00:20:17.411 "driver_specific": { 00:20:17.411 "nvme": [ 00:20:17.411 { 00:20:17.411 "trid": { 00:20:17.411 "trtype": "TCP", 00:20:17.411 "adrfam": "IPv4", 00:20:17.411 "traddr": "10.0.0.2", 
00:20:17.411 "trsvcid": "4420", 00:20:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.411 }, 00:20:17.411 "ctrlr_data": { 00:20:17.411 "cntlid": 1, 00:20:17.411 "vendor_id": "0x8086", 00:20:17.411 "model_number": "SPDK bdev Controller", 00:20:17.411 "serial_number": "00000000000000000000", 00:20:17.411 "firmware_revision": "24.09", 00:20:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.411 "oacs": { 00:20:17.411 "security": 0, 00:20:17.411 "format": 0, 00:20:17.411 "firmware": 0, 00:20:17.411 "ns_manage": 0 00:20:17.411 }, 00:20:17.411 "multi_ctrlr": true, 00:20:17.411 "ana_reporting": false 00:20:17.411 }, 00:20:17.411 "vs": { 00:20:17.411 "nvme_version": "1.3" 00:20:17.411 }, 00:20:17.411 "ns_data": { 00:20:17.411 "id": 1, 00:20:17.411 "can_share": true 00:20:17.411 } 00:20:17.411 } 00:20:17.411 ], 00:20:17.411 "mp_policy": "active_passive" 00:20:17.411 } 00:20:17.411 } 00:20:17.411 ] 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.411 13:12:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.411 [2024-07-15 13:12:38.958737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:17.411 [2024-07-15 13:12:38.958826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8090 (9): Bad file descriptor 00:20:17.411 [2024-07-15 13:12:39.101037] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:17.411 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.411 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.411 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.411 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.669 [ 00:20:17.669 { 00:20:17.669 "name": "nvme0n1", 00:20:17.669 "aliases": [ 00:20:17.669 "4b6d7d16-d427-48dc-a043-258685d45f27" 00:20:17.669 ], 00:20:17.669 "product_name": "NVMe disk", 00:20:17.669 "block_size": 512, 00:20:17.669 "num_blocks": 2097152, 00:20:17.669 "uuid": "4b6d7d16-d427-48dc-a043-258685d45f27", 00:20:17.669 "assigned_rate_limits": { 00:20:17.669 "rw_ios_per_sec": 0, 00:20:17.669 "rw_mbytes_per_sec": 0, 00:20:17.669 "r_mbytes_per_sec": 0, 00:20:17.669 "w_mbytes_per_sec": 0 00:20:17.670 }, 00:20:17.670 "claimed": false, 00:20:17.670 "zoned": false, 00:20:17.670 "supported_io_types": { 00:20:17.670 "read": true, 00:20:17.670 "write": true, 00:20:17.670 "unmap": false, 00:20:17.670 "flush": true, 00:20:17.670 "reset": true, 00:20:17.670 "nvme_admin": true, 00:20:17.670 "nvme_io": true, 00:20:17.670 "nvme_io_md": false, 00:20:17.670 "write_zeroes": true, 00:20:17.670 "zcopy": false, 00:20:17.670 "get_zone_info": false, 00:20:17.670 "zone_management": false, 00:20:17.670 "zone_append": false, 00:20:17.670 "compare": true, 00:20:17.670 "compare_and_write": true, 00:20:17.670 "abort": true, 00:20:17.670 "seek_hole": false, 00:20:17.670 "seek_data": false, 00:20:17.670 "copy": true, 00:20:17.670 "nvme_iov_md": false 00:20:17.670 }, 00:20:17.670 "memory_domains": [ 00:20:17.670 { 00:20:17.670 "dma_device_id": "system", 00:20:17.670 "dma_device_type": 
1 00:20:17.670 } 00:20:17.670 ], 00:20:17.670 "driver_specific": { 00:20:17.670 "nvme": [ 00:20:17.670 { 00:20:17.670 "trid": { 00:20:17.670 "trtype": "TCP", 00:20:17.670 "adrfam": "IPv4", 00:20:17.670 "traddr": "10.0.0.2", 00:20:17.670 "trsvcid": "4420", 00:20:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.670 }, 00:20:17.670 "ctrlr_data": { 00:20:17.670 "cntlid": 2, 00:20:17.670 "vendor_id": "0x8086", 00:20:17.670 "model_number": "SPDK bdev Controller", 00:20:17.670 "serial_number": "00000000000000000000", 00:20:17.670 "firmware_revision": "24.09", 00:20:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.670 "oacs": { 00:20:17.670 "security": 0, 00:20:17.670 "format": 0, 00:20:17.670 "firmware": 0, 00:20:17.670 "ns_manage": 0 00:20:17.670 }, 00:20:17.670 "multi_ctrlr": true, 00:20:17.670 "ana_reporting": false 00:20:17.670 }, 00:20:17.670 "vs": { 00:20:17.670 "nvme_version": "1.3" 00:20:17.670 }, 00:20:17.670 "ns_data": { 00:20:17.670 "id": 1, 00:20:17.670 "can_share": true 00:20:17.670 } 00:20:17.670 } 00:20:17.670 ], 00:20:17.670 "mp_policy": "active_passive" 00:20:17.670 } 00:20:17.670 } 00:20:17.670 ] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dcrmn0TUSq 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dcrmn0TUSq 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 [2024-07-15 13:12:39.151405] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.670 [2024-07-15 13:12:39.151529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dcrmn0TUSq 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
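Stripped of the xtrace noise, the async_init test above drives a short RPC sequence: create the TCP transport, back subsystem nqn.2016-06.io.spdk:cnode0 with a 1024 MiB / 512-byte-block null bdev exposed under the generated NGUID, listen on 4420, and then re-expose the subsystem on 4421 behind an experimental TLS secure channel keyed by an on-disk PSK (the deprecated path that triggers the warnings). A condensed sketch using the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper; every value is copied from the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4b6d7d16d42748dca043258685d45f27
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # TLS variant on the second port: PSK material goes into a 0600 temp file
    key_path=/tmp/tmp.dcrmn0TUSq
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

The initiator-side steps visible in the log (bdev_nvme_attach_controller, bdev_nvme_reset_controller, bdev_nvme_detach_controller) ride the same RPC channel.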
00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 [2024-07-15 13:12:39.159429] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dcrmn0TUSq 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 [2024-07-15 13:12:39.167458] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.670 [2024-07-15 13:12:39.167517] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:17.670 nvme0n1 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 [ 00:20:17.670 { 00:20:17.670 "name": "nvme0n1", 00:20:17.670 "aliases": [ 00:20:17.670 "4b6d7d16-d427-48dc-a043-258685d45f27" 00:20:17.670 ], 00:20:17.670 "product_name": "NVMe disk", 00:20:17.670 "block_size": 512, 00:20:17.670 "num_blocks": 2097152, 00:20:17.670 "uuid": "4b6d7d16-d427-48dc-a043-258685d45f27", 00:20:17.670 "assigned_rate_limits": { 00:20:17.670 "rw_ios_per_sec": 0, 00:20:17.670 "rw_mbytes_per_sec": 0, 00:20:17.670 "r_mbytes_per_sec": 0, 00:20:17.670 "w_mbytes_per_sec": 0 00:20:17.670 }, 00:20:17.670 "claimed": false, 00:20:17.670 "zoned": false, 00:20:17.670 "supported_io_types": { 00:20:17.670 "read": true, 00:20:17.670 "write": true, 00:20:17.670 "unmap": false, 00:20:17.670 "flush": true, 00:20:17.670 "reset": true, 00:20:17.670 "nvme_admin": true, 00:20:17.670 "nvme_io": true, 00:20:17.670 "nvme_io_md": false, 00:20:17.670 "write_zeroes": true, 00:20:17.670 "zcopy": false, 00:20:17.670 "get_zone_info": false, 00:20:17.670 "zone_management": false, 00:20:17.670 "zone_append": false, 00:20:17.670 "compare": true, 00:20:17.670 "compare_and_write": true, 00:20:17.670 "abort": true, 00:20:17.670 "seek_hole": false, 00:20:17.670 "seek_data": false, 00:20:17.670 "copy": true, 00:20:17.670 "nvme_iov_md": false 00:20:17.670 }, 00:20:17.670 "memory_domains": [ 00:20:17.670 { 00:20:17.670 "dma_device_id": "system", 00:20:17.670 "dma_device_type": 1 00:20:17.670 } 00:20:17.670 ], 00:20:17.670 "driver_specific": { 00:20:17.670 "nvme": [ 00:20:17.670 { 00:20:17.670 "trid": { 00:20:17.670 "trtype": "TCP", 00:20:17.670 "adrfam": "IPv4", 00:20:17.670 "traddr": "10.0.0.2", 00:20:17.670 "trsvcid": "4421", 00:20:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:17.670 }, 00:20:17.670 "ctrlr_data": { 00:20:17.670 "cntlid": 3, 00:20:17.670 "vendor_id": "0x8086", 00:20:17.670 "model_number": "SPDK bdev Controller", 00:20:17.670 "serial_number": "00000000000000000000", 00:20:17.670 "firmware_revision": "24.09", 00:20:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:17.670 "oacs": { 00:20:17.670 "security": 0, 00:20:17.670 "format": 0, 00:20:17.670 "firmware": 0, 00:20:17.670 "ns_manage": 0 00:20:17.670 }, 00:20:17.670 "multi_ctrlr": true, 00:20:17.670 "ana_reporting": false 00:20:17.670 }, 00:20:17.670 "vs": { 00:20:17.670 "nvme_version": "1.3" 00:20:17.670 }, 00:20:17.670 "ns_data": { 00:20:17.670 "id": 1, 00:20:17.670 "can_share": true 00:20:17.670 } 00:20:17.670 } 00:20:17.670 ], 00:20:17.670 "mp_policy": "active_passive" 00:20:17.670 } 00:20:17.670 } 00:20:17.670 ] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.dcrmn0TUSq 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.670 rmmod nvme_tcp 00:20:17.670 rmmod nvme_fabrics 00:20:17.670 rmmod nvme_keyring 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3882855 ']' 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3882855 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3882855 ']' 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3882855 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3882855 00:20:17.670 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.671 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.671 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3882855' 00:20:17.671 killing process with pid 3882855 00:20:17.671 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3882855 00:20:17.671 [2024-07-15 13:12:39.352092] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:17.671 [2024-07-15 13:12:39.352139] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.671 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3882855 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.929 13:12:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.509 13:12:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:20.509 00:20:20.509 real 0m6.094s 00:20:20.509 user 0m2.949s 00:20:20.509 sys 0m1.757s 00:20:20.509 13:12:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:20.509 13:12:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 ************************************ 00:20:20.509 END TEST nvmf_async_init 00:20:20.509 ************************************ 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:20.509 13:12:41 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 ************************************ 00:20:20.509 START TEST dma 00:20:20.509 ************************************ 00:20:20.509 13:12:41 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.509 * Looking for test storage... 
00:20:20.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:20.509 13:12:41 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.509 13:12:41 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.509 13:12:41 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.509 13:12:41 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.509 13:12:41 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.509 13:12:41 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.509 13:12:41 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.509 13:12:41 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:20.509 13:12:41 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.509 13:12:41 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.509 13:12:41 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:20.509 13:12:41 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:20.509 00:20:20.509 real 0m0.065s 00:20:20.509 user 0m0.025s 00:20:20.509 sys 0m0.045s 00:20:20.509 13:12:41 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:20.509 13:12:41 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 ************************************ 00:20:20.509 END TEST dma 00:20:20.509 ************************************ 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:20.509 13:12:41 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.509 13:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 ************************************ 00:20:20.509 START TEST nvmf_identify 00:20:20.509 ************************************ 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.509 * Looking for test storage... 
00:20:20.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.509 13:12:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:20.510 13:12:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:22.425 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:22.425 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:22.425 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.425 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:22.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:22.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:20:22.426 00:20:22.426 --- 10.0.0.2 ping statistics --- 00:20:22.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.426 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:20:22.426 00:20:22.426 --- 10.0.0.1 ping statistics --- 00:20:22.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.426 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3884992 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3884992 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3884992 ']' 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.426 13:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.426 [2024-07-15 13:12:43.959577] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:20:22.426 [2024-07-15 13:12:43.959666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.426 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.426 [2024-07-15 13:12:44.021616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.683 [2024-07-15 13:12:44.138944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
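The netns plumbing buried in the xtrace above is easier to follow pulled out of the log. A condensed recap of what nvmf_tcp_init just did, every command taken verbatim from this run (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this machine): one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator.

    ip -4 addr flush cvl_0_0                           # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

With connectivity verified in both directions, nvmf_tgt is launched inside the namespace via ip netns exec, which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the trace.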
00:20:22.683 [2024-07-15 13:12:44.138996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.683 [2024-07-15 13:12:44.139023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.683 [2024-07-15 13:12:44.139035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.683 [2024-07-15 13:12:44.139047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.683 [2024-07-15 13:12:44.139104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.683 [2024-07-15 13:12:44.139156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.683 [2024-07-15 13:12:44.139272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.683 [2024-07-15 13:12:44.139274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.246 [2024-07-15 13:12:44.929815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.246 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 Malloc0 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.504 13:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 [2024-07-15 13:12:45.006967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.504 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.504 [ 00:20:23.504 { 00:20:23.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:23.504 "subtype": "Discovery", 00:20:23.504 "listen_addresses": [ 00:20:23.504 { 00:20:23.504 "trtype": "TCP", 00:20:23.504 "adrfam": "IPv4", 00:20:23.504 "traddr": "10.0.0.2", 00:20:23.504 "trsvcid": "4420" 00:20:23.504 } 00:20:23.504 ], 00:20:23.504 "allow_any_host": true, 00:20:23.504 "hosts": [] 00:20:23.504 }, 00:20:23.504 { 00:20:23.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.505 "subtype": "NVMe", 00:20:23.505 "listen_addresses": [ 00:20:23.505 { 00:20:23.505 "trtype": "TCP", 00:20:23.505 "adrfam": "IPv4", 00:20:23.505 "traddr": "10.0.0.2", 00:20:23.505 "trsvcid": "4420" 00:20:23.505 } 00:20:23.505 ], 00:20:23.505 "allow_any_host": true, 00:20:23.505 "hosts": [], 00:20:23.505 "serial_number": "SPDK00000000000001", 00:20:23.505 "model_number": "SPDK bdev Controller", 00:20:23.505 "max_namespaces": 32, 00:20:23.505 "min_cntlid": 1, 00:20:23.505 "max_cntlid": 65519, 00:20:23.505 "namespaces": [ 00:20:23.505 { 00:20:23.505 "nsid": 1, 00:20:23.505 "bdev_name": "Malloc0", 00:20:23.505 "name": "Malloc0", 00:20:23.505 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.505 "eui64": "ABCDEF0123456789", 00:20:23.505 "uuid": "aab005a3-1915-4d7b-8a26-8b6e308bc788" 00:20:23.505 } 00:20:23.505 ] 00:20:23.505 } 00:20:23.505 ] 00:20:23.505 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.505 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:23.505 [2024-07-15 13:12:45.049376] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
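For reference outside the xtrace noise: the subsystem the identify tool is about to query was configured with the RPC sequence below (arguments verbatim from the rpc_cmd calls above; rpc_cmd is effectively the autotest wrapper around the same scripts/rpc.py socket at /var/tmp/spdk.sock).

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems    # returns the JSON dump shown above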
00:20:23.505 [2024-07-15 13:12:45.049421] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885143 ] 00:20:23.505 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.505 [2024-07-15 13:12:45.082227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:23.505 [2024-07-15 13:12:45.082284] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.505 [2024-07-15 13:12:45.082294] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.505 [2024-07-15 13:12:45.082308] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.505 [2024-07-15 13:12:45.082318] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.505 [2024-07-15 13:12:45.085942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:23.505 [2024-07-15 13:12:45.086013] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x782540 0 00:20:23.505 [2024-07-15 13:12:45.092904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.505 [2024-07-15 13:12:45.092929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.505 [2024-07-15 13:12:45.092938] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.505 [2024-07-15 13:12:45.092944] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.505 [2024-07-15 13:12:45.093009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.093022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.093030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.093047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.505 [2024-07-15 13:12:45.093073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.505 [2024-07-15 13:12:45.100895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.505 [2024-07-15 13:12:45.100912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.505 [2024-07-15 13:12:45.100919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.100927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.505 [2024-07-15 13:12:45.100942] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.505 [2024-07-15 13:12:45.100968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:23.505 [2024-07-15 13:12:45.100977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:23.505 [2024-07-15 13:12:45.100998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101007] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.101026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.505 [2024-07-15 13:12:45.101049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.505 [2024-07-15 13:12:45.101190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.505 [2024-07-15 13:12:45.101206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.505 [2024-07-15 13:12:45.101213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.505 [2024-07-15 13:12:45.101229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:23.505 [2024-07-15 13:12:45.101242] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:23.505 [2024-07-15 13:12:45.101254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.101279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.505 [2024-07-15 13:12:45.101300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.505 [2024-07-15 13:12:45.101429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.505 [2024-07-15 13:12:45.101444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.505 [2024-07-15 13:12:45.101451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.505 [2024-07-15 13:12:45.101466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:23.505 [2024-07-15 13:12:45.101486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.505 [2024-07-15 13:12:45.101499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.101523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.505 [2024-07-15 13:12:45.101544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.505 [2024-07-15 13:12:45.101662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.505 
[2024-07-15 13:12:45.101677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.505 [2024-07-15 13:12:45.101684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.505 [2024-07-15 13:12:45.101699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.505 [2024-07-15 13:12:45.101716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.101742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.505 [2024-07-15 13:12:45.101762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.505 [2024-07-15 13:12:45.101897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.505 [2024-07-15 13:12:45.101913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.505 [2024-07-15 13:12:45.101920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.101926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.505 [2024-07-15 13:12:45.101934] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:23.505 [2024-07-15 13:12:45.101943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:23.505 [2024-07-15 13:12:45.101956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.505 [2024-07-15 13:12:45.102065] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:23.505 [2024-07-15 13:12:45.102073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.505 [2024-07-15 13:12:45.102087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.102094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.505 [2024-07-15 13:12:45.102100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.505 [2024-07-15 13:12:45.102111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.506 [2024-07-15 13:12:45.102132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.506 [2024-07-15 13:12:45.102265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.506 [2024-07-15 13:12:45.102277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.506 [2024-07-15 13:12:45.102288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:23.506 [2024-07-15 13:12:45.102295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.506 [2024-07-15 13:12:45.102303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.506 [2024-07-15 13:12:45.102319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.102345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.506 [2024-07-15 13:12:45.102365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.506 [2024-07-15 13:12:45.102476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.506 [2024-07-15 13:12:45.102488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.506 [2024-07-15 13:12:45.102495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.506 [2024-07-15 13:12:45.102509] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.506 [2024-07-15 13:12:45.102517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.102530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:23.506 [2024-07-15 13:12:45.102544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.102560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.102578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.506 [2024-07-15 13:12:45.102598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.506 [2024-07-15 13:12:45.102767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.506 [2024-07-15 13:12:45.102783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.506 [2024-07-15 13:12:45.102790] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102796] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x782540): datao=0, datal=4096, cccid=0 00:20:23.506 [2024-07-15 13:12:45.102804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e23c0) on tqpair(0x782540): expected_datao=0, payload_size=4096 00:20:23.506 [2024-07-15 13:12:45.102812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
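The nvme_ctrlr.c/nvme_tcp.c DEBUG entries above are SPDK's userspace initiator walking the standard NVMe-oF bring-up state machine against the discovery subsystem: FABRIC CONNECT on the admin queue, then Property Get/Set in place of MMIO register access (read VS and CAP, check CC.EN, write CC.EN = 1, wait for CSTS.RDY = 1), and finally IDENTIFY. A rough kernel-side equivalent of the same handshake, useful for cross-checking the target from this host (a sketch only: it assumes nvme-cli is installed, the trace already ran modprobe nvme-tcp, and the /dev/nvme0 name is hypothetical):

    nvme discover -t tcp -a 10.0.0.2 -s 4420           # query the discovery subsystem
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                            # same IDENTIFY data, kernel path
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1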
00:20:23.506 [2024-07-15 13:12:45.102829] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.102839] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.143888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.506 [2024-07-15 13:12:45.143907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.506 [2024-07-15 13:12:45.143914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.143921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.506 [2024-07-15 13:12:45.143933] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:23.506 [2024-07-15 13:12:45.143948] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:23.506 [2024-07-15 13:12:45.143960] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:23.506 [2024-07-15 13:12:45.143969] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:23.506 [2024-07-15 13:12:45.143977] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:23.506 [2024-07-15 13:12:45.143985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.144000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.144028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.506 [2024-07-15 13:12:45.144078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.506 [2024-07-15 13:12:45.144216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.506 [2024-07-15 13:12:45.144229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.506 [2024-07-15 13:12:45.144235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.506 [2024-07-15 13:12:45.144253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.506 [2024-07-15 13:12:45.144288] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.506 [2024-07-15 13:12:45.144320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.506 [2024-07-15 13:12:45.144352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.506 [2024-07-15 13:12:45.144384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.144403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.506 [2024-07-15 13:12:45.144419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x782540) 00:20:23.506 [2024-07-15 13:12:45.144437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.506 [2024-07-15 13:12:45.144475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e23c0, cid 0, qid 0 00:20:23.506 [2024-07-15 13:12:45.144487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2540, cid 1, qid 0 00:20:23.506 [2024-07-15 13:12:45.144494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e26c0, cid 2, qid 0 00:20:23.506 [2024-07-15 13:12:45.144501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.506 [2024-07-15 13:12:45.144509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e29c0, cid 4, qid 0 00:20:23.506 [2024-07-15 13:12:45.144855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.506 [2024-07-15 13:12:45.144867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.506 [2024-07-15 13:12:45.144874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e29c0) on tqpair=0x782540 00:20:23.506 [2024-07-15 13:12:45.144902] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:23.506 [2024-07-15 13:12:45.144911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:23.506 [2024-07-15 13:12:45.144929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.506 [2024-07-15 13:12:45.144938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x782540) 00:20:23.507 [2024-07-15 13:12:45.144949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.507 [2024-07-15 13:12:45.144970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e29c0, cid 4, qid 0 00:20:23.507 [2024-07-15 13:12:45.145140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.507 [2024-07-15 13:12:45.145153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.507 [2024-07-15 13:12:45.145159] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145166] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x782540): datao=0, datal=4096, cccid=4 00:20:23.507 [2024-07-15 13:12:45.145173] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e29c0) on tqpair(0x782540): expected_datao=0, payload_size=4096 00:20:23.507 [2024-07-15 13:12:45.145180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145191] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145198] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.507 [2024-07-15 13:12:45.145232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.507 [2024-07-15 13:12:45.145238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e29c0) on tqpair=0x782540 00:20:23.507 [2024-07-15 13:12:45.145262] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:23.507 [2024-07-15 13:12:45.145299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x782540) 00:20:23.507 [2024-07-15 13:12:45.145320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.507 [2024-07-15 13:12:45.145336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x782540) 00:20:23.507 [2024-07-15 13:12:45.145359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.507 [2024-07-15 13:12:45.145385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x7e29c0, cid 4, qid 0 00:20:23.507 [2024-07-15 13:12:45.145397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2b40, cid 5, qid 0 00:20:23.507 [2024-07-15 13:12:45.145589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.507 [2024-07-15 13:12:45.145604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.507 [2024-07-15 13:12:45.145611] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145618] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x782540): datao=0, datal=1024, cccid=4 00:20:23.507 [2024-07-15 13:12:45.145625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e29c0) on tqpair(0x782540): expected_datao=0, payload_size=1024 00:20:23.507 [2024-07-15 13:12:45.145632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145642] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145650] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.507 [2024-07-15 13:12:45.145667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.507 [2024-07-15 13:12:45.145674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.145680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2b40) on tqpair=0x782540 00:20:23.507 [2024-07-15 13:12:45.185990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.507 [2024-07-15 13:12:45.186009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.507 [2024-07-15 13:12:45.186016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e29c0) on tqpair=0x782540 00:20:23.507 [2024-07-15 13:12:45.186041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x782540) 00:20:23.507 [2024-07-15 13:12:45.186062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.507 [2024-07-15 13:12:45.186091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e29c0, cid 4, qid 0 00:20:23.507 [2024-07-15 13:12:45.186235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.507 [2024-07-15 13:12:45.186251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.507 [2024-07-15 13:12:45.186257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186264] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x782540): datao=0, datal=3072, cccid=4 00:20:23.507 [2024-07-15 13:12:45.186272] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e29c0) on tqpair(0x782540): expected_datao=0, payload_size=3072 00:20:23.507 [2024-07-15 13:12:45.186279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186289] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186297] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.507 [2024-07-15 13:12:45.186328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.507 [2024-07-15 13:12:45.186338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e29c0) on tqpair=0x782540 00:20:23.507 [2024-07-15 13:12:45.186378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x782540) 00:20:23.507 [2024-07-15 13:12:45.186398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.507 [2024-07-15 13:12:45.186428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e29c0, cid 4, qid 0 00:20:23.507 [2024-07-15 13:12:45.186566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.507 [2024-07-15 13:12:45.186581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.507 [2024-07-15 13:12:45.186588] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186594] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x782540): datao=0, datal=8, cccid=4 00:20:23.507 [2024-07-15 13:12:45.186602] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7e29c0) on tqpair(0x782540): expected_datao=0, payload_size=8 00:20:23.507 [2024-07-15 13:12:45.186609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186619] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.507 [2024-07-15 13:12:45.186626] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.769 [2024-07-15 13:12:45.230892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.769 [2024-07-15 13:12:45.230910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.769 [2024-07-15 13:12:45.230917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.769 [2024-07-15 13:12:45.230939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e29c0) on tqpair=0x782540 00:20:23.770 ===================================================== 00:20:23.770 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.770 ===================================================== 00:20:23.770 Controller Capabilities/Features 00:20:23.770 ================================ 00:20:23.770 Vendor ID: 0000 00:20:23.770 Subsystem Vendor ID: 0000 00:20:23.770 Serial Number: .................... 00:20:23.770 Model Number: ........................................ 
00:20:23.770 Firmware Version: 24.09 00:20:23.770 Recommended Arb Burst: 0 00:20:23.770 IEEE OUI Identifier: 00 00 00 00:20:23.770 Multi-path I/O 00:20:23.770 May have multiple subsystem ports: No 00:20:23.770 May have multiple controllers: No 00:20:23.770 Associated with SR-IOV VF: No 00:20:23.770 Max Data Transfer Size: 131072 00:20:23.770 Max Number of Namespaces: 0 00:20:23.770 Max Number of I/O Queues: 1024 00:20:23.770 NVMe Specification Version (VS): 1.3 00:20:23.770 NVMe Specification Version (Identify): 1.3 00:20:23.770 Maximum Queue Entries: 128 00:20:23.770 Contiguous Queues Required: Yes 00:20:23.770 Arbitration Mechanisms Supported 00:20:23.770 Weighted Round Robin: Not Supported 00:20:23.770 Vendor Specific: Not Supported 00:20:23.770 Reset Timeout: 15000 ms 00:20:23.770 Doorbell Stride: 4 bytes 00:20:23.770 NVM Subsystem Reset: Not Supported 00:20:23.770 Command Sets Supported 00:20:23.770 NVM Command Set: Supported 00:20:23.770 Boot Partition: Not Supported 00:20:23.770 Memory Page Size Minimum: 4096 bytes 00:20:23.770 Memory Page Size Maximum: 4096 bytes 00:20:23.770 Persistent Memory Region: Not Supported 00:20:23.770 Optional Asynchronous Events Supported 00:20:23.770 Namespace Attribute Notices: Not Supported 00:20:23.770 Firmware Activation Notices: Not Supported 00:20:23.770 ANA Change Notices: Not Supported 00:20:23.770 PLE Aggregate Log Change Notices: Not Supported 00:20:23.770 LBA Status Info Alert Notices: Not Supported 00:20:23.770 EGE Aggregate Log Change Notices: Not Supported 00:20:23.770 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.770 Zone Descriptor Change Notices: Not Supported 00:20:23.770 Discovery Log Change Notices: Supported 00:20:23.770 Controller Attributes 00:20:23.770 128-bit Host Identifier: Not Supported 00:20:23.770 Non-Operational Permissive Mode: Not Supported 00:20:23.770 NVM Sets: Not Supported 00:20:23.770 Read Recovery Levels: Not Supported 00:20:23.770 Endurance Groups: Not Supported 00:20:23.770 Predictable Latency Mode: Not Supported 00:20:23.770 Traffic Based Keep ALive: Not Supported 00:20:23.770 Namespace Granularity: Not Supported 00:20:23.770 SQ Associations: Not Supported 00:20:23.770 UUID List: Not Supported 00:20:23.770 Multi-Domain Subsystem: Not Supported 00:20:23.770 Fixed Capacity Management: Not Supported 00:20:23.770 Variable Capacity Management: Not Supported 00:20:23.770 Delete Endurance Group: Not Supported 00:20:23.770 Delete NVM Set: Not Supported 00:20:23.770 Extended LBA Formats Supported: Not Supported 00:20:23.770 Flexible Data Placement Supported: Not Supported 00:20:23.770 00:20:23.770 Controller Memory Buffer Support 00:20:23.770 ================================ 00:20:23.770 Supported: No 00:20:23.770 00:20:23.770 Persistent Memory Region Support 00:20:23.770 ================================ 00:20:23.770 Supported: No 00:20:23.770 00:20:23.770 Admin Command Set Attributes 00:20:23.770 ============================ 00:20:23.770 Security Send/Receive: Not Supported 00:20:23.770 Format NVM: Not Supported 00:20:23.770 Firmware Activate/Download: Not Supported 00:20:23.770 Namespace Management: Not Supported 00:20:23.770 Device Self-Test: Not Supported 00:20:23.770 Directives: Not Supported 00:20:23.770 NVMe-MI: Not Supported 00:20:23.770 Virtualization Management: Not Supported 00:20:23.770 Doorbell Buffer Config: Not Supported 00:20:23.770 Get LBA Status Capability: Not Supported 00:20:23.770 Command & Feature Lockdown Capability: Not Supported 00:20:23.770 Abort Command Limit: 1 00:20:23.770 Async 
Event Request Limit: 4 00:20:23.770 Number of Firmware Slots: N/A 00:20:23.770 Firmware Slot 1 Read-Only: N/A 00:20:23.770 Firmware Activation Without Reset: N/A 00:20:23.770 Multiple Update Detection Support: N/A 00:20:23.770 Firmware Update Granularity: No Information Provided 00:20:23.770 Per-Namespace SMART Log: No 00:20:23.770 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.770 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.770 Command Effects Log Page: Not Supported 00:20:23.770 Get Log Page Extended Data: Supported 00:20:23.770 Telemetry Log Pages: Not Supported 00:20:23.770 Persistent Event Log Pages: Not Supported 00:20:23.770 Supported Log Pages Log Page: May Support 00:20:23.770 Commands Supported & Effects Log Page: Not Supported 00:20:23.770 Feature Identifiers & Effects Log Page:May Support 00:20:23.770 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.770 Data Area 4 for Telemetry Log: Not Supported 00:20:23.770 Error Log Page Entries Supported: 128 00:20:23.770 Keep Alive: Not Supported 00:20:23.770 00:20:23.770 NVM Command Set Attributes 00:20:23.770 ========================== 00:20:23.770 Submission Queue Entry Size 00:20:23.770 Max: 1 00:20:23.770 Min: 1 00:20:23.770 Completion Queue Entry Size 00:20:23.770 Max: 1 00:20:23.770 Min: 1 00:20:23.770 Number of Namespaces: 0 00:20:23.770 Compare Command: Not Supported 00:20:23.770 Write Uncorrectable Command: Not Supported 00:20:23.770 Dataset Management Command: Not Supported 00:20:23.770 Write Zeroes Command: Not Supported 00:20:23.770 Set Features Save Field: Not Supported 00:20:23.770 Reservations: Not Supported 00:20:23.770 Timestamp: Not Supported 00:20:23.770 Copy: Not Supported 00:20:23.770 Volatile Write Cache: Not Present 00:20:23.770 Atomic Write Unit (Normal): 1 00:20:23.770 Atomic Write Unit (PFail): 1 00:20:23.770 Atomic Compare & Write Unit: 1 00:20:23.770 Fused Compare & Write: Supported 00:20:23.770 Scatter-Gather List 00:20:23.770 SGL Command Set: Supported 00:20:23.770 SGL Keyed: Supported 00:20:23.770 SGL Bit Bucket Descriptor: Not Supported 00:20:23.770 SGL Metadata Pointer: Not Supported 00:20:23.770 Oversized SGL: Not Supported 00:20:23.770 SGL Metadata Address: Not Supported 00:20:23.770 SGL Offset: Supported 00:20:23.770 Transport SGL Data Block: Not Supported 00:20:23.770 Replay Protected Memory Block: Not Supported 00:20:23.770 00:20:23.770 Firmware Slot Information 00:20:23.770 ========================= 00:20:23.770 Active slot: 0 00:20:23.770 00:20:23.770 00:20:23.770 Error Log 00:20:23.770 ========= 00:20:23.770 00:20:23.770 Active Namespaces 00:20:23.770 ================= 00:20:23.770 Discovery Log Page 00:20:23.770 ================== 00:20:23.770 Generation Counter: 2 00:20:23.770 Number of Records: 2 00:20:23.770 Record Format: 0 00:20:23.770 00:20:23.770 Discovery Log Entry 0 00:20:23.770 ---------------------- 00:20:23.770 Transport Type: 3 (TCP) 00:20:23.770 Address Family: 1 (IPv4) 00:20:23.770 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.770 Entry Flags: 00:20:23.770 Duplicate Returned Information: 1 00:20:23.770 Explicit Persistent Connection Support for Discovery: 1 00:20:23.770 Transport Requirements: 00:20:23.770 Secure Channel: Not Required 00:20:23.770 Port ID: 0 (0x0000) 00:20:23.770 Controller ID: 65535 (0xffff) 00:20:23.770 Admin Max SQ Size: 128 00:20:23.770 Transport Service Identifier: 4420 00:20:23.770 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:23.770 Transport Address: 10.0.0.2 00:20:23.770 
Discovery Log Entry 1 00:20:23.770 ---------------------- 00:20:23.770 Transport Type: 3 (TCP) 00:20:23.770 Address Family: 1 (IPv4) 00:20:23.770 Subsystem Type: 2 (NVM Subsystem) 00:20:23.770 Entry Flags: 00:20:23.770 Duplicate Returned Information: 0 00:20:23.770 Explicit Persistent Connection Support for Discovery: 0 00:20:23.770 Transport Requirements: 00:20:23.770 Secure Channel: Not Required 00:20:23.770 Port ID: 0 (0x0000) 00:20:23.770 Controller ID: 65535 (0xffff) 00:20:23.770 Admin Max SQ Size: 128 00:20:23.770 Transport Service Identifier: 4420 00:20:23.770 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.770 Transport Address: 10.0.0.2 [2024-07-15 13:12:45.231062] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:23.770 [2024-07-15 13:12:45.231084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e23c0) on tqpair=0x782540 00:20:23.770 [2024-07-15 13:12:45.231095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.770 [2024-07-15 13:12:45.231104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2540) on tqpair=0x782540 00:20:23.770 [2024-07-15 13:12:45.231112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.770 [2024-07-15 13:12:45.231120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e26c0) on tqpair=0x782540 00:20:23.770 [2024-07-15 13:12:45.231127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.770 [2024-07-15 13:12:45.231135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.231143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.771 [2024-07-15 13:12:45.231160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.231187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.231211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.231362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.231375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.231382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.231404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.231430] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.231456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.231609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.231621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.231628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.231643] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:23.771 [2024-07-15 13:12:45.231651] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:23.771 [2024-07-15 13:12:45.231667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.231692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.231713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.231838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.231853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.231860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.231891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.231908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.231919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.231940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.232054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.232066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.232073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.232095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232111] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.232121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.232141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.232256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.232272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.232282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.232306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.232332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.232353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.232468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.232483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.232490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.232513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.232539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.232559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.232674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.232686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.232692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.232715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.232741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.232761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.232886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.232901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.232908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.232931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.232947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.232957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.232978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.233094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.233107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.233113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.233141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.233167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.233187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.233300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.233312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.233319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.233341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.233367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.233387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.233503] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.233518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.233525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.233548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.771 [2024-07-15 13:12:45.233574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.771 [2024-07-15 13:12:45.233594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.771 [2024-07-15 13:12:45.233713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.771 [2024-07-15 13:12:45.233728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.771 [2024-07-15 13:12:45.233735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.771 [2024-07-15 13:12:45.233758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.771 [2024-07-15 13:12:45.233774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.233784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.233804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.233920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.233934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.233941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.233948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.233968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.233978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.233985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.233995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.234016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.234137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.234152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.234158] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.234181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.234208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.234228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.234345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.234357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.234364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.234386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.234412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.234432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.234557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.234569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.234576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.234598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.234624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.234644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.234758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.234770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.234777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 
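Each repetition of the FABRIC PROPERTY GET qid:0 cid:3 block above is one poll of the CSTS register while the discovery controller shuts down; the loop ends when CSTS.SHST reports shutdown complete, which the log notes just below ("shutdown complete in 7 milliseconds"). A sketch of the condition being polled, using the public cached-register accessor rather than the internal nvme_ctrlr_shutdown_poll_async path the log actually exercises:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Sketch: the condition the shutdown polling above waits for.
 * spdk_nvme_ctrlr_get_regs_csts() and SPDK_NVME_SHST_COMPLETE are
 * public API; the helper itself is hypothetical. */
static bool shutdown_finished(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    /* SHST == 10b: shutdown processing complete (NVMe base spec). */
    return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}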
[2024-07-15 13:12:45.234799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.234819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.234829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.234849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.238892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.238908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.238915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.238922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.238953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.238963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.238970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x782540) 00:20:23.772 [2024-07-15 13:12:45.238981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.239003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7e2840, cid 3, qid 0 00:20:23.772 [2024-07-15 13:12:45.239127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.239139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.239146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.239153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7e2840) on tqpair=0x782540 00:20:23.772 [2024-07-15 13:12:45.239166] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:23.772 00:20:23.772 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.772 [2024-07-15 13:12:45.273665] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
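The -r argument in the command above is an SPDK transport ID string, and the debug lines that follow trace the connect/enable/identify state machine it kicks off. A minimal sketch of the same connection through SPDK's public API (hypothetical standalone file; spdk_nvme_transport_id_parse, spdk_nvme_connect and spdk_nvme_detach are real entries in include/spdk/nvme.h, error reporting trimmed):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";      /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* The same string the test passes via -r (outer quotes removed). */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronously runs the connect adminq -> icreq -> FABRIC CONNECT
     * sequence that the debug lines below record. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }
    printf("connected to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}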
00:20:23.772 [2024-07-15 13:12:45.273702] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885154 ] 00:20:23.772 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.772 [2024-07-15 13:12:45.306850] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.772 [2024-07-15 13:12:45.306939] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.772 [2024-07-15 13:12:45.306951] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.772 [2024-07-15 13:12:45.306964] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.772 [2024-07-15 13:12:45.306973] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.772 [2024-07-15 13:12:45.310920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.772 [2024-07-15 13:12:45.310958] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x768540 0 00:20:23.772 [2024-07-15 13:12:45.318893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.772 [2024-07-15 13:12:45.318912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.772 [2024-07-15 13:12:45.318923] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.772 [2024-07-15 13:12:45.318929] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.772 [2024-07-15 13:12:45.318982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.318994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.319001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.772 [2024-07-15 13:12:45.319015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.772 [2024-07-15 13:12:45.319041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.772 [2024-07-15 13:12:45.326906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.326923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.326930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.326937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.772 [2024-07-15 13:12:45.326953] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.772 [2024-07-15 13:12:45.326980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:23.772 [2024-07-15 13:12:45.326990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:23.772 [2024-07-15 13:12:45.327007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.327016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 
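The FABRIC CONNECT above is where the target hands back CNTLID 0x0001; the controller options chosen at connect time also fix the keep-alive cadence that shows up later as "Sending keep alive every 5000000 us" (evidently half of the 10000 ms default timeout). A hedged sketch of overriding those options before connecting; the option struct and its getter are public API, the values are purely illustrative:

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: tune the options that shape the admin connect traced above.
 * Assumes trid was parsed as in the previous sketch. */
static struct spdk_nvme_ctrlr *
connect_with_opts(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    opts.keep_alive_timeout_ms = 10000;   /* keep alives then go out every 5 s */
    snprintf(opts.hostnqn, sizeof(opts.hostnqn), "%s",
             "nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000"); /* illustrative host NQN */

    return spdk_nvme_connect(trid, &opts, sizeof(opts));
}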
[2024-07-15 13:12:45.327023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.772 [2024-07-15 13:12:45.327034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.327057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.772 [2024-07-15 13:12:45.327234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.327247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.327253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.327260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.772 [2024-07-15 13:12:45.327268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:23.772 [2024-07-15 13:12:45.327281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:23.772 [2024-07-15 13:12:45.327293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.327300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.327306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.772 [2024-07-15 13:12:45.327317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.772 [2024-07-15 13:12:45.327338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.772 [2024-07-15 13:12:45.327455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.772 [2024-07-15 13:12:45.327468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.772 [2024-07-15 13:12:45.327474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.772 [2024-07-15 13:12:45.327481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.772 [2024-07-15 13:12:45.327489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:23.773 [2024-07-15 13:12:45.327503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.327519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.327527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.327533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.327544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.327564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.327733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.327745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 
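Each FABRIC PROPERTY GET in this stretch is the fabrics equivalent of a register read: the "read vs" and "read cap" states above fetch the Version and Capabilities registers that the identify summary later renders as "NVMe Specification Version (VS): 1.3" and "Maximum Queue Entries: 128". A sketch reading the same cached values back through public accessors (hypothetical helper; bit-field names follow spdk/nvme_spec.h):

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: print the registers fetched during the read vs / read cap states. */
static void print_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    printf("NVMe version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);  /* 1.3 in this run */
    printf("max queue entries: %u\n", cap.bits.mqes + 1);       /* MQES is 0-based */
    printf("ready timeout: %u ms\n", cap.bits.to * 500);        /* CAP.TO in 500 ms units */
}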
[2024-07-15 13:12:45.327752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.327759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 13:12:45.327767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.327784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.327792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.327799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.327809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.327829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.328002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.328018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 [2024-07-15 13:12:45.328025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 13:12:45.328039] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.773 [2024-07-15 13:12:45.328047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.328061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.328170] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:23.773 [2024-07-15 13:12:45.328177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.328204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.328229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.328250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.328435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.328448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 [2024-07-15 13:12:45.328454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 
13:12:45.328469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.773 [2024-07-15 13:12:45.328489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.328516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.328536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.328661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.328676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 [2024-07-15 13:12:45.328683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 13:12:45.328697] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.773 [2024-07-15 13:12:45.328705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.328719] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:23.773 [2024-07-15 13:12:45.328733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.328747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.328755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.328766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.328787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.329018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.773 [2024-07-15 13:12:45.329033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.773 [2024-07-15 13:12:45.329040] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329046] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=4096, cccid=0 00:20:23.773 [2024-07-15 13:12:45.329054] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c83c0) on tqpair(0x768540): expected_datao=0, payload_size=4096 00:20:23.773 [2024-07-15 13:12:45.329061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329072] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.773 
[2024-07-15 13:12:45.329135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.329146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 [2024-07-15 13:12:45.329153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 13:12:45.329177] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:23.773 [2024-07-15 13:12:45.329190] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:23.773 [2024-07-15 13:12:45.329198] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:23.773 [2024-07-15 13:12:45.329205] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:23.773 [2024-07-15 13:12:45.329216] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:23.773 [2024-07-15 13:12:45.329224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.329238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.329250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.329275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.773 [2024-07-15 13:12:45.329311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.329530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.329543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.773 [2024-07-15 13:12:45.329550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.773 [2024-07-15 13:12:45.329567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.329590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.773 [2024-07-15 13:12:45.329600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x768540) 
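The identify-done lines just above show the transfer-size clamp: the TCP transport advertises max_xfer_size 4294967295 (effectively unlimited), so the controller's MDTS wins at 2^MDTS pages of CAP.MPSMIN bytes, evidently 4096 << 5 = 131072 here, which reappears as "Max Data Transfer Size: 131072" in the summary at the end of the run. A sketch of that arithmetic from public identify data (hypothetical helper):

#include <stdint.h>
#include "spdk/nvme.h"

/* Sketch: recompute the "MDTS max_xfer_size" figure from identify data. */
static uint32_t mdts_max_xfer_size(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    uint32_t min_page = 1u << (12 + cap.bits.mpsmin);   /* 4096 when MPSMIN == 0 */

    if (cdata->mdts == 0) {
        return UINT32_MAX;                    /* MDTS 0 means no controller limit */
    }
    return min_page * (1u << cdata->mdts);    /* 4096 << 5 == 131072 */
}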
00:20:23.773 [2024-07-15 13:12:45.329622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.773 [2024-07-15 13:12:45.329631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.329653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.773 [2024-07-15 13:12:45.329662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.329699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.773 [2024-07-15 13:12:45.329707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.329726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.773 [2024-07-15 13:12:45.329738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.773 [2024-07-15 13:12:45.329745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.773 [2024-07-15 13:12:45.329755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.773 [2024-07-15 13:12:45.329779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c83c0, cid 0, qid 0 00:20:23.773 [2024-07-15 13:12:45.329807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8540, cid 1, qid 0 00:20:23.773 [2024-07-15 13:12:45.329815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c86c0, cid 2, qid 0 00:20:23.773 [2024-07-15 13:12:45.329823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.773 [2024-07-15 13:12:45.329831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.773 [2024-07-15 13:12:45.330046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.773 [2024-07-15 13:12:45.330061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.774 [2024-07-15 13:12:45.330068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.774 [2024-07-15 13:12:45.330082] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:23.774 [2024-07-15 13:12:45.330091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.330105] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.330116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.330126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.774 [2024-07-15 13:12:45.330150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.774 [2024-07-15 13:12:45.330170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.774 [2024-07-15 13:12:45.330349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.774 [2024-07-15 13:12:45.330371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.774 [2024-07-15 13:12:45.330378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.774 [2024-07-15 13:12:45.330448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.330467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.330496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.774 [2024-07-15 13:12:45.330515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.774 [2024-07-15 13:12:45.330535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.774 [2024-07-15 13:12:45.330708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.774 [2024-07-15 13:12:45.330724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.774 [2024-07-15 13:12:45.330730] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330737] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=4096, cccid=4 00:20:23.774 [2024-07-15 13:12:45.330744] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c89c0) on tqpair(0x768540): expected_datao=0, payload_size=4096 00:20:23.774 [2024-07-15 13:12:45.330755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330773] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.330782] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.334902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.774 [2024-07-15 13:12:45.334919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:23.774 [2024-07-15 13:12:45.334925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.334932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.774 [2024-07-15 13:12:45.334947] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:23.774 [2024-07-15 13:12:45.334968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.334986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.335015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.774 [2024-07-15 13:12:45.335034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.774 [2024-07-15 13:12:45.335057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.774 [2024-07-15 13:12:45.335249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.774 [2024-07-15 13:12:45.335265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.774 [2024-07-15 13:12:45.335272] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335278] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=4096, cccid=4 00:20:23.774 [2024-07-15 13:12:45.335286] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c89c0) on tqpair(0x768540): expected_datao=0, payload_size=4096 00:20:23.774 [2024-07-15 13:12:45.335293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335345] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.774 [2024-07-15 13:12:45.335475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.774 [2024-07-15 13:12:45.335481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.774 [2024-07-15 13:12:45.335508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.335526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.335541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.774 [2024-07-15 13:12:45.335559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.774 [2024-07-15 13:12:45.335581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.774 [2024-07-15 13:12:45.335714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.774 [2024-07-15 13:12:45.335730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.774 [2024-07-15 13:12:45.335740] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335747] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=4096, cccid=4 00:20:23.774 [2024-07-15 13:12:45.335755] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c89c0) on tqpair(0x768540): expected_datao=0, payload_size=4096 00:20:23.774 [2024-07-15 13:12:45.335762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335800] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335809] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.774 [2024-07-15 13:12:45.335952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.774 [2024-07-15 13:12:45.335958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.774 [2024-07-15 13:12:45.335965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.774 [2024-07-15 13:12:45.335978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:23.774 [2024-07-15 13:12:45.335993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336047] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:23.775 [2024-07-15 13:12:45.336055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:23.775 [2024-07-15 13:12:45.336063] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:23.775 [2024-07-15 13:12:45.336082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.336102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.336113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.336135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.775 [2024-07-15 13:12:45.336161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.775 [2024-07-15 13:12:45.336173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8b40, cid 5, qid 0 00:20:23.775 [2024-07-15 13:12:45.336336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.336351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.336358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.336375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.336388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.336395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8b40) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.336417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.336437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.336457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8b40, cid 5, qid 0 00:20:23.775 [2024-07-15 13:12:45.336631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.336648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.336655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8b40) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.336679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.336699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.336720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8b40, cid 5, qid 0 00:20:23.775 [2024-07-15 13:12:45.336900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.336915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.336922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8b40) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.336945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.336954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.336965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.336985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8b40, cid 5, qid 0 00:20:23.775 [2024-07-15 13:12:45.337153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.337169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.337175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8b40) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.337206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.337228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.337243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.337260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.337271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.337292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.337319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x768540) 00:20:23.775 [2024-07-15 13:12:45.337336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.775 [2024-07-15 13:12:45.337357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8b40, cid 5, qid 0 00:20:23.775 [2024-07-15 13:12:45.337384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c89c0, cid 4, qid 0 00:20:23.775 [2024-07-15 13:12:45.337392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8cc0, cid 6, qid 0 00:20:23.775 [2024-07-15 
13:12:45.337399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8e40, cid 7, qid 0 00:20:23.775 [2024-07-15 13:12:45.337695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.775 [2024-07-15 13:12:45.337711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.775 [2024-07-15 13:12:45.337719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337725] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=8192, cccid=5 00:20:23.775 [2024-07-15 13:12:45.337733] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c8b40) on tqpair(0x768540): expected_datao=0, payload_size=8192 00:20:23.775 [2024-07-15 13:12:45.337740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337750] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337758] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.775 [2024-07-15 13:12:45.337775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.775 [2024-07-15 13:12:45.337781] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337787] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=512, cccid=4 00:20:23.775 [2024-07-15 13:12:45.337795] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c89c0) on tqpair(0x768540): expected_datao=0, payload_size=512 00:20:23.775 [2024-07-15 13:12:45.337802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337811] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337818] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.775 [2024-07-15 13:12:45.337835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.775 [2024-07-15 13:12:45.337842] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=512, cccid=6 00:20:23.775 [2024-07-15 13:12:45.337855] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c8cc0) on tqpair(0x768540): expected_datao=0, payload_size=512 00:20:23.775 [2024-07-15 13:12:45.337862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337871] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337887] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.775 [2024-07-15 13:12:45.337906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.775 [2024-07-15 13:12:45.337912] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x768540): datao=0, datal=4096, cccid=7 00:20:23.775 [2024-07-15 13:12:45.337930] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c8e40) on tqpair(0x768540): expected_datao=0, payload_size=4096 00:20:23.775 [2024-07-15 13:12:45.337938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337947] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337955] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.337976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.337982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.337988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8b40) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.338007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.338019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.338025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.338032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c89c0) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.338047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.338058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.338064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.775 [2024-07-15 13:12:45.338070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8cc0) on tqpair=0x768540 00:20:23.775 [2024-07-15 13:12:45.338081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.775 [2024-07-15 13:12:45.338090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.775 [2024-07-15 13:12:45.338097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.776 [2024-07-15 13:12:45.338103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8e40) on tqpair=0x768540 00:20:23.776 ===================================================== 00:20:23.776 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.776 ===================================================== 00:20:23.776 Controller Capabilities/Features 00:20:23.776 ================================ 00:20:23.776 Vendor ID: 8086 00:20:23.776 Subsystem Vendor ID: 8086 00:20:23.776 Serial Number: SPDK00000000000001 00:20:23.776 Model Number: SPDK bdev Controller 00:20:23.776 Firmware Version: 24.09 00:20:23.776 Recommended Arb Burst: 6 00:20:23.776 IEEE OUI Identifier: e4 d2 5c 00:20:23.776 Multi-path I/O 00:20:23.776 May have multiple subsystem ports: Yes 00:20:23.776 May have multiple controllers: Yes 00:20:23.776 Associated with SR-IOV VF: No 00:20:23.776 Max Data Transfer Size: 131072 00:20:23.776 Max Number of Namespaces: 32 00:20:23.776 Max Number of I/O Queues: 127 00:20:23.776 NVMe Specification Version (VS): 1.3 00:20:23.776 NVMe Specification Version (Identify): 1.3 00:20:23.776 Maximum Queue Entries: 128 00:20:23.776 Contiguous Queues Required: Yes 00:20:23.776 Arbitration Mechanisms Supported 00:20:23.776 Weighted Round Robin: Not Supported 00:20:23.776 Vendor Specific: Not Supported 00:20:23.776 Reset Timeout: 15000 ms 00:20:23.776 
Doorbell Stride: 4 bytes 00:20:23.776 NVM Subsystem Reset: Not Supported 00:20:23.776 Command Sets Supported 00:20:23.776 NVM Command Set: Supported 00:20:23.776 Boot Partition: Not Supported 00:20:23.776 Memory Page Size Minimum: 4096 bytes 00:20:23.776 Memory Page Size Maximum: 4096 bytes 00:20:23.776 Persistent Memory Region: Not Supported 00:20:23.776 Optional Asynchronous Events Supported 00:20:23.776 Namespace Attribute Notices: Supported 00:20:23.776 Firmware Activation Notices: Not Supported 00:20:23.776 ANA Change Notices: Not Supported 00:20:23.776 PLE Aggregate Log Change Notices: Not Supported 00:20:23.776 LBA Status Info Alert Notices: Not Supported 00:20:23.776 EGE Aggregate Log Change Notices: Not Supported 00:20:23.776 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.776 Zone Descriptor Change Notices: Not Supported 00:20:23.776 Discovery Log Change Notices: Not Supported 00:20:23.776 Controller Attributes 00:20:23.776 128-bit Host Identifier: Supported 00:20:23.776 Non-Operational Permissive Mode: Not Supported 00:20:23.776 NVM Sets: Not Supported 00:20:23.776 Read Recovery Levels: Not Supported 00:20:23.776 Endurance Groups: Not Supported 00:20:23.776 Predictable Latency Mode: Not Supported 00:20:23.776 Traffic Based Keep Alive: Not Supported 00:20:23.776 Namespace Granularity: Not Supported 00:20:23.776 SQ Associations: Not Supported 00:20:23.776 UUID List: Not Supported 00:20:23.776 Multi-Domain Subsystem: Not Supported 00:20:23.776 Fixed Capacity Management: Not Supported 00:20:23.776 Variable Capacity Management: Not Supported 00:20:23.776 Delete Endurance Group: Not Supported 00:20:23.776 Delete NVM Set: Not Supported 00:20:23.776 Extended LBA Formats Supported: Not Supported 00:20:23.776 Flexible Data Placement Supported: Not Supported 00:20:23.776 00:20:23.776 Controller Memory Buffer Support 00:20:23.776 ================================ 00:20:23.776 Supported: No 00:20:23.776 00:20:23.776 Persistent Memory Region Support 00:20:23.776 ================================ 00:20:23.776 Supported: No 00:20:23.776 00:20:23.776 Admin Command Set Attributes 00:20:23.776 ============================ 00:20:23.776 Security Send/Receive: Not Supported 00:20:23.776 Format NVM: Not Supported 00:20:23.776 Firmware Activate/Download: Not Supported 00:20:23.776 Namespace Management: Not Supported 00:20:23.776 Device Self-Test: Not Supported 00:20:23.776 Directives: Not Supported 00:20:23.776 NVMe-MI: Not Supported 00:20:23.776 Virtualization Management: Not Supported 00:20:23.776 Doorbell Buffer Config: Not Supported 00:20:23.776 Get LBA Status Capability: Not Supported 00:20:23.776 Command & Feature Lockdown Capability: Not Supported 00:20:23.776 Abort Command Limit: 4 00:20:23.776 Async Event Request Limit: 4 00:20:23.776 Number of Firmware Slots: N/A 00:20:23.776 Firmware Slot 1 Read-Only: N/A 00:20:23.776 Firmware Activation Without Reset: N/A 00:20:23.776 Multiple Update Detection Support: N/A 00:20:23.776 Firmware Update Granularity: No Information Provided 00:20:23.776 Per-Namespace SMART Log: No 00:20:23.776 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.776 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:23.776 Command Effects Log Page: Supported 00:20:23.776 Get Log Page Extended Data: Supported 00:20:23.776 Telemetry Log Pages: Not Supported 00:20:23.776 Persistent Event Log Pages: Not Supported 00:20:23.776 Supported Log Pages Log Page: May Support 00:20:23.776 Commands Supported & Effects Log Page: Not Supported 00:20:23.776 Feature Identifiers & 
Effects Log Page: May Support 00:20:23.776 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.776 Data Area 4 for Telemetry Log: Not Supported 00:20:23.776 Error Log Page Entries Supported: 128 00:20:23.776 Keep Alive: Supported 00:20:23.776 Keep Alive Granularity: 10000 ms 00:20:23.776 00:20:23.776 NVM Command Set Attributes 00:20:23.776 ========================== 00:20:23.776 Submission Queue Entry Size 00:20:23.776 Max: 64 00:20:23.776 Min: 64 00:20:23.776 Completion Queue Entry Size 00:20:23.776 Max: 16 00:20:23.776 Min: 16 00:20:23.776 Number of Namespaces: 32 00:20:23.776 Compare Command: Supported 00:20:23.776 Write Uncorrectable Command: Not Supported 00:20:23.776 Dataset Management Command: Supported 00:20:23.776 Write Zeroes Command: Supported 00:20:23.776 Set Features Save Field: Not Supported 00:20:23.776 Reservations: Supported 00:20:23.776 Timestamp: Not Supported 00:20:23.776 Copy: Supported 00:20:23.776 Volatile Write Cache: Present 00:20:23.776 Atomic Write Unit (Normal): 1 00:20:23.776 Atomic Write Unit (PFail): 1 00:20:23.776 Atomic Compare & Write Unit: 1 00:20:23.776 Fused Compare & Write: Supported 00:20:23.776 Scatter-Gather List 00:20:23.776 SGL Command Set: Supported 00:20:23.776 SGL Keyed: Supported 00:20:23.776 SGL Bit Bucket Descriptor: Not Supported 00:20:23.776 SGL Metadata Pointer: Not Supported 00:20:23.776 Oversized SGL: Not Supported 00:20:23.776 SGL Metadata Address: Not Supported 00:20:23.776 SGL Offset: Supported 00:20:23.776 Transport SGL Data Block: Not Supported 00:20:23.776 Replay Protected Memory Block: Not Supported 00:20:23.776 00:20:23.776 Firmware Slot Information 00:20:23.776 ========================= 00:20:23.776 Active slot: 1 00:20:23.776 Slot 1 Firmware Revision: 24.09 00:20:23.776 00:20:23.776 00:20:23.776 Commands Supported and Effects 00:20:23.776 ============================== 00:20:23.776 Admin Commands 00:20:23.776 -------------- 00:20:23.776 Get Log Page (02h): Supported 00:20:23.776 Identify (06h): Supported 00:20:23.776 Abort (08h): Supported 00:20:23.776 Set Features (09h): Supported 00:20:23.776 Get Features (0Ah): Supported 00:20:23.776 Asynchronous Event Request (0Ch): Supported 00:20:23.776 Keep Alive (18h): Supported 00:20:23.776 I/O Commands 00:20:23.776 ------------ 00:20:23.776 Flush (00h): Supported LBA-Change 00:20:23.776 Write (01h): Supported LBA-Change 00:20:23.776 Read (02h): Supported 00:20:23.776 Compare (05h): Supported 00:20:23.776 Write Zeroes (08h): Supported LBA-Change 00:20:23.776 Dataset Management (09h): Supported LBA-Change 00:20:23.776 Copy (19h): Supported LBA-Change 00:20:23.776 00:20:23.776 Error Log 00:20:23.776 ========= 00:20:23.776 00:20:23.776 Arbitration 00:20:23.776 =========== 00:20:23.776 Arbitration Burst: 1 00:20:23.776 00:20:23.776 Power Management 00:20:23.776 ================ 00:20:23.776 Number of Power States: 1 00:20:23.776 Current Power State: Power State #0 00:20:23.776 Power State #0: 00:20:23.776 Max Power: 0.00 W 00:20:23.776 Non-Operational State: Operational 00:20:23.776 Entry Latency: Not Reported 00:20:23.776 Exit Latency: Not Reported 00:20:23.776 Relative Read Throughput: 0 00:20:23.776 Relative Read Latency: 0 00:20:23.776 Relative Write Throughput: 0 00:20:23.776 Relative Write Latency: 0 00:20:23.776 Idle Power: Not Reported 00:20:23.776 Active Power: Not Reported 00:20:23.776 Non-Operational Permissive Mode: Not Supported 00:20:23.776 00:20:23.776 Health Information 00:20:23.776 ================== 00:20:23.776 Critical Warnings: 00:20:23.776 Available Spare Space: 
OK 00:20:23.776 Temperature: OK 00:20:23.776 Device Reliability: OK 00:20:23.776 Read Only: No 00:20:23.776 Volatile Memory Backup: OK 00:20:23.776 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:23.776 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:23.776 Available Spare: 0% 00:20:23.776 Available Spare Threshold: 0% 00:20:23.776 Life Percentage Used:[2024-07-15 13:12:45.338254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.776 [2024-07-15 13:12:45.338265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x768540) 00:20:23.776 [2024-07-15 13:12:45.338276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.776 [2024-07-15 13:12:45.338297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8e40, cid 7, qid 0 00:20:23.776 [2024-07-15 13:12:45.338484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.338500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.338507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.338514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8e40) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.338561] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:23.777 [2024-07-15 13:12:45.338581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c83c0) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.338591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.777 [2024-07-15 13:12:45.338600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8540) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.338607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.777 [2024-07-15 13:12:45.338631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c86c0) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.338638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.777 [2024-07-15 13:12:45.338646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.338656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.777 [2024-07-15 13:12:45.338669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.338676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.338683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.338693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.338714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.338868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.342900] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.342909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.342915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.342926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.342934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.342940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.342951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.342993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.343175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.343187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.343193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.343208] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:23.777 [2024-07-15 13:12:45.343216] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:23.777 [2024-07-15 13:12:45.343232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.343257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.343289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.343456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.343469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.343475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.343498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.343524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.343544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.343681] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.343697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.343704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.343728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.343754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.343775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.343901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.343917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.343924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.343947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.343963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.343973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.343994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.344110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.344122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.344129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.344151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.344177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.344197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.344369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.344384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.344391] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.344414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.344440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.344460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.344627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.344640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.344650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.344673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.344700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.344720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.344835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.344850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.344857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.344889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.344906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.344917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.344937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.345107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.345122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.345128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.345135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 
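The records above and immediately below all repeat one cycle. NVMe/TCP PDU type 5 is a CapsuleResp (type 7, seen earlier, is C2HData carrying controller-to-host data): the response header is parsed by nvme_tcp_capsule_resp_hdr_handle, the request is retired by nvme_tcp_req_complete, and the host immediately queues the next admin capsule, printed as FABRIC PROPERTY GET qid:0 cid:3. This is the shutdown poll loop announced earlier (RTD3E = 0, so the host fell back to a 10000 ms shutdown timeout): each iteration reads a controller property, presumably CSTS, until CSTS.SHST reports that shutdown has completed. A trace like this can be regenerated roughly as sketched below; the sketch assumes a debug build of SPDK (./configure --enable-debug) so the *DEBUG* log components are compiled in, assumes the identify example accepts the usual -L/--logflag option of SPDK tools, and reuses the target address from this run.

# Sketch: regenerate an nvme_tcp debug trace against the test target.
# SPDK_DIR is this job's workspace path; adjust it for another machine.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -L nvme -L nvme_tcp    # enable the nvme/nvme_tcp debug log components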
[2024-07-15 13:12:45.345152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.345161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.345167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.777 [2024-07-15 13:12:45.345178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.777 [2024-07-15 13:12:45.345198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.777 [2024-07-15 13:12:45.345367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.777 [2024-07-15 13:12:45.345382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.777 [2024-07-15 13:12:45.345389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.777 [2024-07-15 13:12:45.345395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.777 [2024-07-15 13:12:45.345412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.345438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.345458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.345576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.345591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.345598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.345626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.345652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.345673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.345792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.345807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.345814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.345837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.345846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 
13:12:45.345853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.345863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.345891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.346064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.346076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.346083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.346105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.346131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.346151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.346321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.346336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.346343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.346367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.346393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.346413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.346585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.346600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.346607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.346633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.346660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.346681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.346801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.346816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.346822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.346845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.346861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x768540) 00:20:23.778 [2024-07-15 13:12:45.346871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.778 [2024-07-15 13:12:45.350915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c8840, cid 3, qid 0 00:20:23.778 [2024-07-15 13:12:45.351108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.778 [2024-07-15 13:12:45.351121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.778 [2024-07-15 13:12:45.351127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.778 [2024-07-15 13:12:45.351134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c8840) on tqpair=0x768540 00:20:23.778 [2024-07-15 13:12:45.351147] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:23.778 0% 00:20:23.778 Data Units Read: 0 00:20:23.778 Data Units Written: 0 00:20:23.778 Host Read Commands: 0 00:20:23.778 Host Write Commands: 0 00:20:23.778 Controller Busy Time: 0 minutes 00:20:23.778 Power Cycles: 0 00:20:23.778 Power On Hours: 0 hours 00:20:23.778 Unsafe Shutdowns: 0 00:20:23.778 Unrecoverable Media Errors: 0 00:20:23.778 Lifetime Error Log Entries: 0 00:20:23.778 Warning Temperature Time: 0 minutes 00:20:23.778 Critical Temperature Time: 0 minutes 00:20:23.778 00:20:23.778 Number of Queues 00:20:23.778 ================ 00:20:23.778 Number of I/O Submission Queues: 127 00:20:23.778 Number of I/O Completion Queues: 127 00:20:23.778 00:20:23.778 Active Namespaces 00:20:23.778 ================= 00:20:23.778 Namespace ID:1 00:20:23.778 Error Recovery Timeout: Unlimited 00:20:23.778 Command Set Identifier: NVM (00h) 00:20:23.778 Deallocate: Supported 00:20:23.778 Deallocated/Unwritten Error: Not Supported 00:20:23.778 Deallocated Read Value: Unknown 00:20:23.778 Deallocate in Write Zeroes: Not Supported 00:20:23.778 Deallocated Guard Field: 0xFFFF 00:20:23.778 Flush: Supported 00:20:23.778 Reservation: Supported 00:20:23.778 Namespace Sharing Capabilities: Multiple Controllers 00:20:23.778 Size (in LBAs): 131072 (0GiB) 00:20:23.778 Capacity (in LBAs): 131072 (0GiB) 00:20:23.778 Utilization (in LBAs): 131072 (0GiB) 00:20:23.778 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:23.778 EUI64: ABCDEF0123456789 00:20:23.778 UUID: aab005a3-1915-4d7b-8a26-8b6e308bc788 00:20:23.778 Thin Provisioning: Not Supported 00:20:23.778 Per-NS Atomic Units: Yes 00:20:23.778 Atomic Boundary 
Size (Normal): 0 00:20:23.778 Atomic Boundary Size (PFail): 0 00:20:23.778 Atomic Boundary Offset: 0 00:20:23.778 Maximum Single Source Range Length: 65535 00:20:23.778 Maximum Copy Length: 65535 00:20:23.778 Maximum Source Range Count: 1 00:20:23.778 NGUID/EUI64 Never Reused: No 00:20:23.778 Namespace Write Protected: No 00:20:23.778 Number of LBA Formats: 1 00:20:23.778 Current LBA Format: LBA Format #00 00:20:23.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:23.778 00:20:23.778 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:23.778 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.778 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.778 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.779 rmmod nvme_tcp 00:20:23.779 rmmod nvme_fabrics 00:20:23.779 rmmod nvme_keyring 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3884992 ']' 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3884992 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3884992 ']' 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3884992 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884992 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884992' 00:20:23.779 killing process with pid 3884992 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3884992 00:20:23.779 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3884992 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.037 13:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.573 13:12:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.573 00:20:26.573 real 0m5.963s 00:20:26.573 user 0m7.094s 00:20:26.573 sys 0m1.805s 00:20:26.573 13:12:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.573 13:12:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.573 ************************************ 00:20:26.573 END TEST nvmf_identify 00:20:26.573 ************************************ 00:20:26.573 13:12:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.573 13:12:47 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.573 13:12:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.573 13:12:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.573 13:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.573 ************************************ 00:20:26.573 START TEST nvmf_perf 00:20:26.573 ************************************ 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.573 * Looking for test storage... 
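The identify test's teardown, just above, runs in a deliberate order: the subsystem is deleted over RPC while nvmf_tgt can still service requests, the kernel initiator modules are unloaded (the modprobe -v -r calls produce the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), and only then is the target process killed and the test namespace removed. A sketch of the same cleanup done by hand follows; the PID, NQN, namespace, and interface names are the ones from this run, and the ip netns delete line is an assumed expansion of the harness's _remove_spdk_ns helper, not its literal body.

# Sketch: manual equivalent of the nvmftestfini sequence for this run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # while RPCs still work
modprobe -v -r nvme-tcp          # unload initiator modules, transport first
modprobe -v -r nvme-fabrics
kill 3884992                     # nvmf_tgt PID from this log
ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1         # drop the 10.0.0.1 test address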
00:20:26.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.573 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.574 13:12:47 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.574 13:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.477 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:28.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:28.478 00:20:28.478 --- 10.0.0.2 ping statistics --- 00:20:28.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.478 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:28.478 00:20:28.478 --- 10.0.0.1 ping statistics --- 00:20:28.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.478 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3887141 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3887141 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3887141 ']' 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.478 13:12:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.478 [2024-07-15 13:12:49.939941] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:20:28.478 [2024-07-15 13:12:49.940042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.478 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.478 [2024-07-15 13:12:50.010343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.478 [2024-07-15 13:12:50.127855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.479 [2024-07-15 13:12:50.127923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
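The nvmf_tcp_init block traced above reduces to a short ip/iptables sequence: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A minimal standalone sketch of that plumbing, reusing the interface names and 10.0.0.0/24 addressing the script chose:

  # Sketch of nvmf_tcp_init (nvmf/common.sh); names and addresses as traced above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port disappears into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
  ping -c 1 10.0.0.2                         # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator sanity check

This is also why nvmf_tgt is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF`: only the namespaced target sees cvl_0_0, while the rpc.py calls that follow still work from the root namespace because they go over the /var/tmp/spdk.sock Unix socket rather than TCP.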
00:20:28.479 [2024-07-15 13:12:50.127937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.479 [2024-07-15 13:12:50.127948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.479 [2024-07-15 13:12:50.127958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.479 [2024-07-15 13:12:50.128017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.479 [2024-07-15 13:12:50.128078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.479 [2024-07-15 13:12:50.128144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.479 [2024-07-15 13:12:50.128147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:28.737 13:12:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:32.021 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:32.021 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:32.021 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:32.021 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:32.277 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:32.277 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:32.277 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:32.277 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:32.277 13:12:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.534 [2024-07-15 13:12:54.133819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.534 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.791 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:32.791 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.048 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:33.048 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:33.305 13:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.563 [2024-07-15 13:12:55.125486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.563 13:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.822 13:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:33.823 13:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:33.823 13:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:33.823 13:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:35.199 Initializing NVMe Controllers 00:20:35.199 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:35.199 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:35.199 Initialization complete. Launching workers. 00:20:35.199 ======================================================== 00:20:35.199 Latency(us) 00:20:35.199 Device Information : IOPS MiB/s Average min max 00:20:35.199 PCIE (0000:88:00.0) NSID 1 from core 0: 85597.76 334.37 373.41 43.42 4340.59 00:20:35.199 ======================================================== 00:20:35.199 Total : 85597.76 334.37 373.41 43.42 4340.59 00:20:35.199 00:20:35.199 13:12:56 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.199 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.576 Initializing NVMe Controllers 00:20:36.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.577 Initialization complete. Launching workers. 
00:20:36.577 ======================================================== 00:20:36.577 Latency(us) 00:20:36.577 Device Information : IOPS MiB/s Average min max 00:20:36.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.66 0.38 10603.09 219.37 45872.65 00:20:36.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16313.64 7958.98 47897.26 00:20:36.577 ======================================================== 00:20:36.577 Total : 158.44 0.62 12829.85 219.37 47897.26 00:20:36.577 00:20:36.577 13:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.577 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.952 Initializing NVMe Controllers 00:20:37.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.952 Initialization complete. Launching workers. 00:20:37.952 ======================================================== 00:20:37.952 Latency(us) 00:20:37.952 Device Information : IOPS MiB/s Average min max 00:20:37.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8569.00 33.47 3742.62 413.81 7504.68 00:20:37.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3908.00 15.27 8223.96 6155.80 15782.87 00:20:37.952 ======================================================== 00:20:37.952 Total : 12477.00 48.74 5146.25 413.81 15782.87 00:20:37.952 00:20:37.952 13:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:37.952 13:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:37.952 13:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.952 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.512 Initializing NVMe Controllers 00:20:40.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.512 Controller IO queue size 128, less than required. 00:20:40.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.512 Controller IO queue size 128, less than required. 00:20:40.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.512 Initialization complete. Launching workers. 
00:20:40.512 ======================================================== 00:20:40.512 Latency(us) 00:20:40.512 Device Information : IOPS MiB/s Average min max 00:20:40.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1121.98 280.50 117666.29 64745.36 215310.03 00:20:40.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.99 144.25 229211.76 71007.81 346930.74 00:20:40.512 ======================================================== 00:20:40.512 Total : 1698.97 424.74 155548.42 64745.36 346930.74 00:20:40.512 00:20:40.512 13:13:02 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:40.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.771 No valid NVMe controllers or AIO or URING devices found 00:20:40.771 Initializing NVMe Controllers 00:20:40.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.771 Controller IO queue size 128, less than required. 00:20:40.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.771 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:40.771 Controller IO queue size 128, less than required. 00:20:40.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.771 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:40.771 WARNING: Some requested NVMe devices were skipped 00:20:40.771 13:13:02 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:40.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.298 Initializing NVMe Controllers 00:20:43.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.298 Controller IO queue size 128, less than required. 00:20:43.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.298 Controller IO queue size 128, less than required. 00:20:43.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:43.299 Initialization complete. Launching workers. 
00:20:43.299 00:20:43.299 ==================== 00:20:43.299 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:43.299 TCP transport: 00:20:43.299 polls: 17557 00:20:43.299 idle_polls: 4022 00:20:43.299 sock_completions: 13535 00:20:43.299 nvme_completions: 4491 00:20:43.299 submitted_requests: 6684 00:20:43.299 queued_requests: 1 00:20:43.299 00:20:43.299 ==================== 00:20:43.299 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:43.299 TCP transport: 00:20:43.299 polls: 22948 00:20:43.299 idle_polls: 9820 00:20:43.299 sock_completions: 13128 00:20:43.299 nvme_completions: 4703 00:20:43.299 submitted_requests: 7116 00:20:43.299 queued_requests: 1 00:20:43.299 ======================================================== 00:20:43.299 Latency(us) 00:20:43.299 Device Information : IOPS MiB/s Average min max 00:20:43.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1121.29 280.32 117562.36 72074.64 206416.62 00:20:43.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1174.23 293.56 110783.37 47458.30 161219.40 00:20:43.299 ======================================================== 00:20:43.299 Total : 2295.52 573.88 114094.69 47458.30 206416.62 00:20:43.299 00:20:43.299 13:13:04 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:43.299 13:13:04 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.557 rmmod nvme_tcp 00:20:43.557 rmmod nvme_fabrics 00:20:43.557 rmmod nvme_keyring 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3887141 ']' 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3887141 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3887141 ']' 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3887141 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3887141 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.557 13:13:05 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3887141' 00:20:43.557 killing process with pid 3887141 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3887141 00:20:43.557 13:13:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3887141 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.456 13:13:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.360 13:13:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:47.361 00:20:47.361 real 0m21.081s 00:20:47.361 user 1m5.800s 00:20:47.361 sys 0m4.927s 00:20:47.361 13:13:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.361 13:13:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.361 ************************************ 00:20:47.361 END TEST nvmf_perf 00:20:47.361 ************************************ 00:20:47.361 13:13:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:47.361 13:13:08 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.361 13:13:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:47.361 13:13:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.361 13:13:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:47.361 ************************************ 00:20:47.361 START TEST nvmf_fio_host 00:20:47.361 ************************************ 00:20:47.361 13:13:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:47.361 * Looking for test storage... 
00:20:47.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:47.361 13:13:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:47.362 13:13:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:49.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:49.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:49.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:49.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
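The device-classification pass that just replayed here (identical to the one nvmf_perf ran earlier) keys a PCI cache on vendor:device pairs — Intel 0x1592/0x159b for E810, 0x37d2 for X722, and the listed Mellanox IDs for mlx — then resolves each surviving PCI function to its kernel netdev through /sys/bus/pci/devices/<addr>/net/. A rough standalone equivalent of the E810 path (a sketch of the idea, not the script's actual pci_bus_cache machinery):

  # Walk sysfs, keep Intel E810 functions, and print the netdev behind each;
  # approximates the "Found ..." / "Found net devices under ..." lines above.
  intel=0x8086
  e810="0x1592 0x159b"
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
      [[ $vendor == "$intel" && " $e810 " == *" $device "* ]] || continue
      echo "Found ${dev##*/} ($vendor - $device)"
      for net in "$dev"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
  done

On this box both matches are 0x159b functions bound to the ice driver, which is what keeps is_hw=yes and routes the run down the physical-NIC path (NET_TYPE=phy) instead of the virtual-interface fallback.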
00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.267 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.526 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.526 13:13:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:20:49.526 00:20:49.526 --- 10.0.0.2 ping statistics --- 00:20:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.526 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:49.526 00:20:49.526 --- 10.0.0.1 ping statistics --- 00:20:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.526 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3891032 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3891032 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3891032 ']' 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.526 13:13:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.526 [2024-07-15 13:13:11.103263] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:20:49.526 [2024-07-15 13:13:11.103347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.526 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.526 [2024-07-15 13:13:11.171249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.784 [2024-07-15 13:13:11.288545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
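For orientation, the rpc.py sequence that follows (host/fio.sh steps 29 through 36 in the trace) is the entire target provisioning for this fio test. Collapsed into one shell session — path shortened to $RPC, flags exactly as traced, with the transport options taken over from NVMF_TRANSPORT_OPTS:

  # Condensed fio-host target bring-up; rpc.py talks to /var/tmp/spdk.sock by default.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB IO unit size
  $RPC bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The fio jobs further down then reach this subsystem through the SPDK fio plugin (LD_PRELOAD of build/fio/spdk_nvme) with --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1', bypassing the kernel NVMe initiator entirely.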
00:20:49.784 [2024-07-15 13:13:11.288608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.784 [2024-07-15 13:13:11.288625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.784 [2024-07-15 13:13:11.288638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.784 [2024-07-15 13:13:11.288650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.784 [2024-07-15 13:13:11.288736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.785 [2024-07-15 13:13:11.288807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.785 [2024-07-15 13:13:11.288901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.785 [2024-07-15 13:13:11.288905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.350 13:13:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.350 13:13:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:50.350 13:13:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:50.608 [2024-07-15 13:13:12.256279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.608 13:13:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:50.608 13:13:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.608 13:13:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.608 13:13:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:50.867 Malloc1 00:20:50.867 13:13:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.125 13:13:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.384 13:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.648 [2024-07-15 13:13:13.287617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.648 13:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:51.908 13:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.166 fio-3.35 00:20:52.166 Starting 1 thread 00:20:52.166 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.696 00:20:54.696 test: (groupid=0, jobs=1): err= 0: pid=3891514: Mon Jul 15 13:13:16 2024 00:20:54.696 read: IOPS=9049, BW=35.3MiB/s (37.1MB/s)(70.9MiB/2006msec) 00:20:54.696 slat (usec): min=2, max=165, avg= 2.66, stdev= 1.94 00:20:54.696 clat (usec): min=3297, max=13602, avg=7804.16, stdev=566.66 00:20:54.696 lat (usec): min=3328, max=13612, avg=7806.83, stdev=566.55 00:20:54.696 clat percentiles (usec): 00:20:54.696 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:20:54.696 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:20:54.696 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:20:54.696 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[12780], 00:20:54.696 | 99.99th=[13435] 00:20:54.696 bw ( KiB/s): min=35336, 
max=36760, per=99.89%, avg=36156.00, stdev=602.59, samples=4 00:20:54.696 iops : min= 8834, max= 9190, avg=9039.00, stdev=150.65, samples=4 00:20:54.696 write: IOPS=9063, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:20:54.696 slat (usec): min=2, max=146, avg= 2.78, stdev= 1.57 00:20:54.696 clat (usec): min=1441, max=11670, avg=6277.96, stdev=500.66 00:20:54.696 lat (usec): min=1450, max=11672, avg=6280.74, stdev=500.61 00:20:54.696 clat percentiles (usec): 00:20:54.696 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:20:54.696 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:20:54.696 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:20:54.696 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9896], 99.95th=[10945], 00:20:54.696 | 99.99th=[11600] 00:20:54.696 bw ( KiB/s): min=36040, max=36480, per=100.00%, avg=36258.00, stdev=242.77, samples=4 00:20:54.696 iops : min= 9010, max= 9120, avg=9064.50, stdev=60.69, samples=4 00:20:54.696 lat (msec) : 2=0.02%, 4=0.10%, 10=99.76%, 20=0.11% 00:20:54.696 cpu : usr=57.56%, sys=36.86%, ctx=56, majf=0, minf=41 00:20:54.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:54.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.696 issued rwts: total=18153,18181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.696 00:20:54.696 Run status group 0 (all jobs): 00:20:54.696 READ: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.4MB), run=2006-2006msec 00:20:54.696 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2006-2006msec 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:54.696 13:13:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.696 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:54.696 fio-3.35 00:20:54.696 Starting 1 thread 00:20:54.696 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.225 00:20:57.225 test: (groupid=0, jobs=1): err= 0: pid=3891849: Mon Jul 15 13:13:18 2024 00:20:57.225 read: IOPS=8260, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:20:57.225 slat (usec): min=2, max=111, avg= 3.77, stdev= 1.68 00:20:57.225 clat (usec): min=2272, max=17243, avg=9196.72, stdev=2181.57 00:20:57.225 lat (usec): min=2276, max=17247, avg=9200.49, stdev=2181.59 00:20:57.225 clat percentiles (usec): 00:20:57.225 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7373], 00:20:57.225 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:20:57.225 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11994], 95.00th=[12911], 00:20:57.225 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16909], 99.95th=[16909], 00:20:57.225 | 99.99th=[16909] 00:20:57.225 bw ( KiB/s): min=55424, max=77536, per=50.72%, avg=67032.00, stdev=10046.65, samples=4 00:20:57.225 iops : min= 3464, max= 4846, avg=4189.50, stdev=627.92, samples=4 00:20:57.225 write: IOPS=4807, BW=75.1MiB/s (78.8MB/s)(137MiB/1830msec); 0 zone resets 00:20:57.225 slat (usec): min=30, max=141, avg=33.66, stdev= 4.93 00:20:57.225 clat (usec): min=5548, max=18529, avg=11241.60, stdev=1983.51 00:20:57.225 lat (usec): min=5582, max=18563, avg=11275.26, stdev=1983.53 00:20:57.225 clat percentiles (usec): 00:20:57.225 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:20:57.225 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:20:57.225 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14091], 95.00th=[14877], 00:20:57.225 | 99.00th=[16057], 99.50th=[16712], 99.90th=[17433], 99.95th=[17695], 00:20:57.225 | 99.99th=[18482] 00:20:57.225 bw ( KiB/s): min=58400, max=79648, per=90.73%, avg=69792.00, stdev=9995.24, samples=4 00:20:57.225 iops : min= 3650, max= 4978, avg=4362.00, stdev=624.70, samples=4 00:20:57.225 lat (msec) : 4=0.13%, 10=54.12%, 20=45.75% 00:20:57.225 cpu : usr=73.53%, sys=23.38%, ctx=23, majf=0, minf=67 
00:20:57.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.225 issued rwts: total=16579,8798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.225 00:20:57.225 Run status group 0 (all jobs): 00:20:57.225 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2007-2007msec 00:20:57.225 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=137MiB (144MB), run=1830-1830msec 00:20:57.225 13:13:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.483 13:13:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.483 rmmod nvme_tcp 00:20:57.484 rmmod nvme_fabrics 00:20:57.484 rmmod nvme_keyring 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3891032 ']' 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3891032 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3891032 ']' 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3891032 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891032 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891032' 00:20:57.484 killing process with pid 3891032 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3891032 00:20:57.484 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3891032 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.742 13:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.683 13:13:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.943 00:20:59.943 real 0m12.433s 00:20:59.943 user 0m36.996s 00:20:59.943 sys 0m4.073s 00:20:59.943 13:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.943 13:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.943 ************************************ 00:20:59.943 END TEST nvmf_fio_host 00:20:59.943 ************************************ 00:20:59.943 13:13:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:59.943 13:13:21 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.943 13:13:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.943 13:13:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.943 13:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:59.943 ************************************ 00:20:59.943 START TEST nvmf_failover 00:20:59.943 ************************************ 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.943 * Looking for test storage... 
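For orientation, the fio_plugin helper traced throughout the nvmf_fio_host run that just finished above (common/autotest_common.sh@1337-1352) boils down to the sketch below. It probes, via ldd, whether the spdk_nvme fio ioengine was built against an ASan runtime and, if so, preloads that runtime ahead of the plugin so fio can dlopen it; the NVMe-oF connection parameters then travel through fio's --filename key=value syntax (trtype/adrfam/traddr/trsvcid/ns) instead of a block-device path. This is a condensed, illustrative version, not the exact helper: variable names follow the trace, but the accumulation details are simplified.

    fio_dir=/usr/src/fio
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # field 3 of an ldd line ("libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)")
        # is the resolved runtime path; empty when the plugin links no sanitizer
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $lib ]] && asan_lib="$asan_lib $lib"
    done
    # the sanitizer runtime (if any) must be preloaded ahead of the plugin itself
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

In both fio invocations above the greps came back empty (this was not a sanitizer build), which is why asan_lib stayed blank and LD_PRELOAD ended up holding only the plugin path.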
00:20:59.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:59.943 13:13:21 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.944 13:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:01.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:01.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.847 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:01.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:01.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:01.848 00:21:01.848 --- 10.0.0.2 ping statistics --- 00:21:01.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.848 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:01.848 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:21:02.106 00:21:02.106 --- 10.0.0.1 ping statistics --- 00:21:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.106 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3894043 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3894043 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3894043 ']' 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.106 13:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 [2024-07-15 13:13:23.626764] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
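The nvmftestinit/nvmf_tcp_init plumbing traced just above amounts to the following sequence, condensed straight from the commands in the trace (the initial addr-flush steps are omitted): the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction verifies the 10.0.0.0/24 link before anything NVMe-oF is started.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

This is why the nvmf_tgt launch that follows is wrapped in 'ip netns exec cvl_0_0_ns_spdk': the target process only sees the namespaced port, so traffic between initiator and target really crosses the physical link under test.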
00:21:02.106 [2024-07-15 13:13:23.626849] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.106 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.106 [2024-07-15 13:13:23.692655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:02.364 [2024-07-15 13:13:23.817514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.364 [2024-07-15 13:13:23.817565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.364 [2024-07-15 13:13:23.817581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.364 [2024-07-15 13:13:23.817594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.364 [2024-07-15 13:13:23.817606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.364 [2024-07-15 13:13:23.817709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.364 [2024-07-15 13:13:23.817814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.364 [2024-07-15 13:13:23.817817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.932 13:13:24 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:03.190 [2024-07-15 13:13:24.818396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.190 13:13:24 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:03.448 Malloc0 00:21:03.448 13:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.707 13:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.965 13:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.223 [2024-07-15 13:13:25.839345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.223 13:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:04.481 [2024-07-15 
13:13:26.108218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:04.481 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:04.738 [2024-07-15 13:13:26.352912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3894463 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3894463 /var/tmp/bdevperf.sock 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3894463 ']' 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.738 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.307 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.307 13:13:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:05.307 13:13:26 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.307 NVMe0n1 00:21:05.565 13:13:27 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.823 00:21:05.823 13:13:27 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3894598 00:21:05.823 13:13:27 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.823 13:13:27 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:06.758 13:13:28 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.017 13:13:28 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:10.299 13:13:31 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.299 00:21:10.299 13:13:31 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:10.557 [2024-07-15 13:13:32.237706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 [2024-07-15 13:13:32.237891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cf640 is same with the state(5) to be set 00:21:10.557 13:13:32 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:13.831 13:13:35 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.831 [2024-07-15 13:13:35.480169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.831 13:13:35 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:15.202 13:13:36 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:15.202 [2024-07-15 13:13:36.734060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734176] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 
00:21:15.202 [2024-07-15 13:13:36.734462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 [2024-07-15 13:13:36.734565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cfe70 is same with the state(5) to be set 00:21:15.202 13:13:36 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3894598 00:21:21.815 { 00:21:21.815 "core_count": 1, 00:21:21.815 "test_results": [ 00:21:21.815 { 00:21:21.815 "job": "NVMe0n1", 00:21:21.815 "test_status": "finished", 00:21:21.815 "core_mask": "0x1", 00:21:21.815 "workload": "verify", 00:21:21.815 "verify_LBA_range_start": 0, 00:21:21.815 "verify_LBA_range_len": 16384, 00:21:21.815 "queue_depth": 128, 00:21:21.815 "io_size": 4096, 00:21:21.815 "runtime": 15.007458686828613, 00:21:21.815 "io_per_second": 8624.378050941203, 00:21:21.815 "MiB_per_second": 33.68897676148907, 00:21:21.815 "fails_per_second": 778.0797535412224, 00:21:21.815 "timeout_per_second": 0.0, 00:21:21.815 "average_latency_us": 13586.841941988336, 00:21:21.815 "min_latency_us": 782.7911111111111, 00:21:21.815 "max_latency_us": 22719.146666666667 00:21:21.815 } 00:21:21.815 ] 00:21:21.815 } 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3894463 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3894463 ']' 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3894463 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894463 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894463' 00:21:21.815 killing process with pid 3894463 00:21:21.815 13:13:42 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3894463 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3894463 00:21:21.815 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:21.815 [2024-07-15 13:13:26.417686] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:21:21.815 [2024-07-15 13:13:26.417765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894463 ] 00:21:21.815 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.815 [2024-07-15 13:13:26.478979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.815 [2024-07-15 13:13:26.586991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.815 Running I/O for 15 seconds... 00:21:21.815 [2024-07-15 13:13:28.570666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.570977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.570991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.815 [2024-07-15 13:13:28.571275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.815 [2024-07-15 13:13:28.571289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.815 [2024-07-15 13:13:28.571302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.815 [2024-07-15 13:13:28.571316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.815 [2024-07-15 13:13:28.571330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs repeat for WRITE lba:79296-79640 (SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid) and READ lba:78992-79008 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:21:21.816 [2024-07-15 13:13:28.572708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.816 [2024-07-15 13:13:28.572726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0
00:21:21.816 [2024-07-15 13:13:28.572738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.816 [2024-07-15 13:13:28.572795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.816 [2024-07-15 13:13:28.572816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.816 [2024-07-15 13:13:28.572831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.816 [2024-07-15 13:13:28.572844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.816 [2024-07-15 13:13:28.572857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.816 [2024-07-15 13:13:28.572870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.816 [2024-07-15 13:13:28.572890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.816 [2024-07-15 13:13:28.572904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.816 [2024-07-15 13:13:28.572917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d0f0 is same with the state(5) to be set
00:21:21.816 [2024-07-15 13:13:28.573073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.816 [2024-07-15 13:13:28.573092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.816 [2024-07-15 13:13:28.573104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0
00:21:21.816 [2024-07-15 13:13:28.573117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / print_command / ABORTED - SQ DELETION sequence repeats for each remaining queued request: WRITE lba:79664-80008, READ lba:79016-79136, WRITE lba:79144-79352 (all sqid:1 cid:0, PRP1 0x0 PRP2 0x0) ...]
00:21:21.819 [2024-07-15 13:13:28.585267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.819 [2024-07-15 13:13:28.585278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.819 [2024-07-15 13:13:28.585289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0
00:21:21.819 [2024-07-15 13:13:28.585301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:21:21.819 [2024-07-15 13:13:28.585313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585593] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.585955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.585967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.585980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.585990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.586036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.586081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.586128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 
13:13:28.586173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.819 [2024-07-15 13:13:28.586219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.819 [2024-07-15 13:13:28.586230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:21:21.819 [2024-07-15 13:13:28.586242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.819 [2024-07-15 13:13:28.586254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586452] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.586951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.586964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.586975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.586986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.587002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.587015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.587032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 
13:13:28.587044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.587056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.587069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.587080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.587090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.587103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.587116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.820 [2024-07-15 13:13:28.587126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.820 [2024-07-15 13:13:28.587137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:21:21.820 [2024-07-15 13:13:28.587149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.820 [2024-07-15 13:13:28.587215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bc3390 was disconnected and freed. reset controller. 00:21:21.820 [2024-07-15 13:13:28.587233] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:21.820 [2024-07-15 13:13:28.587248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:21.820 [2024-07-15 13:13:28.587309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor 00:21:21.820 [2024-07-15 13:13:28.590640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:21.820 [2024-07-15 13:13:28.662705] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
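(Editor's note: the bdev_nvme messages just above record the failover this test exercises: the qpair on 10.0.0.2:4420 is torn down, queued I/O is aborted with SQ DELETION status, and the controller is reset onto the alternate trid 10.0.0.2:4421. A minimal sketch of how such an alternate path is typically registered, assuming SPDK's standard rpc.py options; the bdev name nvme0 is illustrative and not taken from this log:)
  # primary path for subsystem nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # attaching a second trid under the same controller name registers it as an
  # alternate path, which bdev_nvme_failover_trid can switch to on reset
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1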
00:21:21.820 [2024-07-15 13:13:32.240445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.240985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.240998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.241030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.241059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.241087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.241115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.820 [2024-07-15 13:13:32.241143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.820 [2024-07-15 13:13:32.241157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.241827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.821 [2024-07-15 13:13:32.241856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.821 [2024-07-15 13:13:32.241894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.821 [2024-07-15 13:13:32.241924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.821 [2024-07-15 13:13:32.241952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.821 [2024-07-15 13:13:32.241980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.241994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.821 [2024-07-15 13:13:32.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.821 [2024-07-15 13:13:32.242529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90624 len:8 PRP1 0x0 PRP2 0x0
00:21:21.821 [2024-07-15 13:13:32.242542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.821 [2024-07-15 13:13:32.242626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.821 [2024-07-15 13:13:32.242655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.821 [2024-07-15 13:13:32.242683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.821 [2024-07-15 13:13:32.242697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.822 [2024-07-15 13:13:32.242710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.242723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d0f0 is same with the state(5) to be set
00:21:21.822 [2024-07-15 13:13:32.242892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.242912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.242924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90632 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.242938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.242954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.242966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.242978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90640 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.242990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90648 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90656 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90664 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90672 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90680 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90688 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90696 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90704 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90712 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90720 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90728 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90736 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90744 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90752 len:8 PRP1 0x0 PRP2 0x0
00:21:21.822 [2024-07-15 13:13:32.243679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.822 [2024-07-15 13:13:32.243691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.822 [2024-07-15 13:13:32.243702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.822 [2024-07-15 13:13:32.243713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90760 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.243759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90768 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.243805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90776 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.243850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90784 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.243909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90792 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.243955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90800 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.243980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.243990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:90808 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90816 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90824 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90832 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90840 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90848 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90856 len:8 PRP1 0x0 PRP2 0x0 
00:21:21.822 [2024-07-15 13:13:32.244295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90864 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90872 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.822 [2024-07-15 13:13:32.244430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90880 len:8 PRP1 0x0 PRP2 0x0 00:21:21.822 [2024-07-15 13:13:32.244443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.822 [2024-07-15 13:13:32.244456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.822 [2024-07-15 13:13:32.244467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90888 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90896 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90904 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90912 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90920 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90928 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90936 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90944 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90952 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90000 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.244951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.244962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90008 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.244978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.244992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90016 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90024 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90032 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90040 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:21.823 [2024-07-15 13:13:32.245177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90056 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90072 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90088 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245462] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90096 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90104 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90960 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90968 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90976 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:21:21.823 [2024-07-15 13:13:32.245753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90120 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90136 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90144 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.245954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.245965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90152 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.245977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.245989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.246000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.246010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0 00:21:21.823 [2024-07-15 13:13:32.246022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.823 [2024-07-15 13:13:32.246035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.823 [2024-07-15 13:13:32.246046] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.823 [2024-07-15 13:13:32.246057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90192 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90200 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90208 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90216 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90224 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90232 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90240 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90248 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90256 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 
[2024-07-15 13:13:32.246632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90264 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90272 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90280 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90288 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90296 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90304 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.246920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.246930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90312 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.246956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90320 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90328 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90336 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90344 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90352 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:90360 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90368 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90376 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90384 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90392 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.824 [2024-07-15 13:13:32.253619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90400 len:8 PRP1 0x0 PRP2 0x0 00:21:21.824 [2024-07-15 13:13:32.253631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.824 [2024-07-15 13:13:32.253643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.824 [2024-07-15 13:13:32.253654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.825 [2024-07-15 13:13:32.253664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90408 len:8 PRP1 0x0 PRP2 0x0 
00:21:21.825 [2024-07-15 13:13:32.253676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.825 [2024-07-15 13:13:32.253689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:21.825 [2024-07-15 13:13:32.253699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.825 [2024-07-15 13:13:32.253710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90416 len:8 PRP1 0x0 PRP2 0x0
00:21:21.825 [2024-07-15 13:13:32.253722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/manual-complete cycle above repeats for every remaining queued request on qid:1: WRITE lba:90424-90624 and READ lba:89960-89992 (all len:8, LBA step 8), each completed with ABORTED - SQ DELETION (00/08) ...]
00:21:21.825 [2024-07-15 13:13:32.255268] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d67d80 was disconnected and freed. reset controller.
00:21:21.825 [2024-07-15 13:13:32.255286] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:21.825 [2024-07-15 13:13:32.255302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:21.825 [2024-07-15 13:13:32.255353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor
00:21:21.826 [2024-07-15 13:13:32.258633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:21.826 [2024-07-15 13:13:32.296821] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
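The six lines just above are the useful part of this episode: the qpair is torn down, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes; the surrounding abort storm is the expected side effect of the submission-queue deletion. A minimal sketch for pulling that summary out of a saved copy of this console output, assuming it was captured to a file named nvmf-tcp-phy-autotest.log (a hypothetical path, not one this job creates):

  # Count aborted completions; grep -o counts occurrences rather than
  # physical lines, since several log entries are wrapped onto one line here.
  grep -o 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest.log | wc -l

  # Surface only the failover-relevant events, skipping the per-command
  # noise emitted from nvme_qpair.c.
  grep -oE '(bdev_nvme|nvme_ctrlr|nvme_tcp)\.c: ?[0-9]+:[a-z_0-9]+: \*[A-Z]+\*: [^[]+' nvmf-tcp-phy-autotest.log

Both commands are plain grep, so they can run against a downloaded build artifact without SPDK installed.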
00:21:21.826 [2024-07-15 13:13:36.735544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.826 [2024-07-15 13:13:36.735584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the print/abort pair repeats for each in-flight READ lba:22192-22488 (len:8, SGL TRANSPORT DATA BLOCK, various cids), each ABORTED - SQ DELETION (00/08) ...]
00:21:21.826 [2024-07-15 13:13:36.736719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:21.826 [2024-07-15 13:13:36.736732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for each in-flight WRITE lba:22544-22912 (len:8, SGL DATA BLOCK OFFSET, len:0x1000, various cids) ...]
00:21:21.827 [2024-07-15 13:13:36.738157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:21.827 [2024-07-15 13:13:36.738175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22920 len:8 PRP1 0x0 PRP2 0x0
00:21:21.827 [2024-07-15 13:13:36.738188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/manual-complete cycle continues for queued WRITE lba:22928-23200 and READ lba:22496-22512 (all len:8, PRP1 0x0 PRP2 0x0) ...]
00:21:21.828 [2024-07-15 13:13:36.740021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:21:21.828 [2024-07-15 13:13:36.740031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.828 [2024-07-15 13:13:36.740041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22520 len:8 PRP1 0x0 PRP2 0x0 00:21:21.828 [2024-07-15 13:13:36.740054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:21.828 [2024-07-15 13:13:36.740077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:21.828 [2024-07-15 13:13:36.740088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:8 PRP1 0x0 PRP2 0x0 00:21:21.828 [2024-07-15 13:13:36.740100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740156] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d67b70 was disconnected and freed. reset controller. 00:21:21.828 [2024-07-15 13:13:36.740173] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:21.828 [2024-07-15 13:13:36.740206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.828 [2024-07-15 13:13:36.740224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.828 [2024-07-15 13:13:36.740251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.828 [2024-07-15 13:13:36.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.828 [2024-07-15 13:13:36.740302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.828 [2024-07-15 13:13:36.740315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:21.828 [2024-07-15 13:13:36.743555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:21.828 [2024-07-15 13:13:36.743593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor 00:21:21.828 [2024-07-15 13:13:36.933520] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:21.828
00:21:21.828                                Latency(us)
00:21:21.828 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:21.828 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.828 Verification LBA range: start 0x0 length 0x4000
00:21:21.828 NVMe0n1            :      15.01    8624.38      33.69     778.08     0.00   13586.84     782.79   22719.15
00:21:21.829 ===================================================================================================================
00:21:21.829 Total              :               8624.38      33.69     778.08     0.00   13586.84     782.79   22719.15
00:21:21.829 Received shutdown signal, test time was about 15.000000 seconds
00:21:21.829
00:21:21.829                                Latency(us)
00:21:21.829 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:21.829 ===================================================================================================================
00:21:21.829 Total              :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3896318
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3896318 /var/tmp/bdevperf.sock
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3896318 ']'
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
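
The host/failover.sh@65-67 trace above is the pass condition for this test: exactly one 'Resetting controller successful' line per failover leg, three in total. A minimal sketch of that assertion, assuming the bdevperf output was captured to try.txt as it is later in this run:

    count=$(grep -c 'Resetting controller successful' try.txt)  # one line per completed reset
    if (( count != 3 )); then                                   # expect one reset per failover leg
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi
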
00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.829 13:13:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:21.829 13:13:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.829 13:13:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:21.829 13:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.829 [2024-07-15 13:13:43.379114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.829 13:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:22.086 [2024-07-15 13:13:43.623771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:22.086 13:13:43 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:22.651 NVMe0n1 00:21:22.651 13:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:22.908 00:21:22.908 13:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.164 00:21:23.164 13:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.164 13:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:23.421 13:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.679 13:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:26.964 13:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.964 13:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:26.964 13:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3896988 00:21:26.964 13:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.964 13:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3896988 00:21:28.342 { 00:21:28.342 "core_count": 1, 00:21:28.342 "test_results": [ 00:21:28.342 { 00:21:28.342 "job": "NVMe0n1", 00:21:28.342 "test_status": "finished", 00:21:28.342 "core_mask": "0x1", 00:21:28.342 "workload": "verify", 00:21:28.342 "verify_LBA_range_start": 0, 00:21:28.342 "verify_LBA_range_len": 16384, 00:21:28.342 "queue_depth": 
128,
00:21:28.342     "io_size": 4096,
00:21:28.342     "runtime": 1.0056450366973877,
00:21:28.342     "io_per_second": 8233.521769610548,
00:21:28.342     "MiB_per_second": 32.162194412541204,
00:21:28.342     "fails_per_second": 0.0,
00:21:28.342     "timeout_per_second": 0.0,
00:21:28.342     "average_latency_us": 15473.944138844161,
00:21:28.342     "min_latency_us": 3070.482962962963,
00:21:28.342     "max_latency_us": 16893.724444444444
00:21:28.342   }
00:21:28.342 ]
00:21:28.342 }
00:21:28.342 13:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:28.342 [2024-07-15 13:13:42.850108] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:21:28.342 [2024-07-15 13:13:42.850200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896318 ]
00:21:28.342 EAL: No free 2048 kB hugepages reported on node 1
00:21:28.342 [2024-07-15 13:13:42.908813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:28.342 [2024-07-15 13:13:43.015784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:28.342 [2024-07-15 13:13:45.209529] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:28.342 [2024-07-15 13:13:45.209631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:28.342 [2024-07-15 13:13:45.209653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:28.342 [... the same ASYNC EVENT REQUEST abort repeats for admin commands cid:1 through cid:3 ...]
00:21:28.342 [2024-07-15 13:13:45.209766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:28.342 [2024-07-15 13:13:45.209808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:28.342 [2024-07-15 13:13:45.209840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12070f0 (9): Bad file descriptor
00:21:28.342 [2024-07-15 13:13:45.222315] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:28.342 Running I/O for 1 seconds...
00:21:28.342
00:21:28.342                                Latency(us)
00:21:28.342 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:28.342 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:28.342 Verification LBA range: start 0x0 length 0x4000
00:21:28.342 NVMe0n1            :       1.01    8233.52      32.16       0.00     0.00   15473.94    3070.48   16893.72
00:21:28.342 ===================================================================================================================
00:21:28.342 Total              :               8233.52      32.16       0.00     0.00   15473.94    3070.48   16893.72
00:21:28.342 13:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:28.342 13:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:28.342 13:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:28.600 13:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:28.600 13:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:28.858 13:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:29.117 13:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:32.411 13:13:53 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:32.411 13:13:53 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3896318
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3896318 ']'
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3896318
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896318
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896318'
00:21:32.411 killing process with pid 3896318
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3896318
00:21:32.411 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3896318
00:21:32.669 13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
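
The Latency(us) table above and the JSON object a few lines earlier are two views of the same perform_tests result: bdevperf.py prints the JSON and renders the table from it. A hedged jq sketch for pulling the headline numbers straight out of that JSON (field names are taken from the output above; result.json is an assumed capture file, not something this run writes):

    jq -r '.test_results[] |
           "\(.job): \(.io_per_second) IO/s, \(.MiB_per_second) MiB/s, avg \(.average_latency_us) us"' result.json
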
13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.928 rmmod nvme_tcp 00:21:32.928 rmmod nvme_fabrics 00:21:32.928 rmmod nvme_keyring 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3894043 ']' 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3894043 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3894043 ']' 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3894043 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894043 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894043' 00:21:32.928 killing process with pid 3894043 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3894043 00:21:32.928 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3894043 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.498 13:13:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.405 13:13:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.405 00:21:35.405 real 0m35.520s 00:21:35.405 user 2m5.063s 00:21:35.405 sys 0m5.885s 00:21:35.405 13:13:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.405 13:13:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
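
The module teardown traced above runs in a retry loop because the initiator side can hold a reference to nvme-tcp for a moment after bdevperf exits; the rmmod lines confirm nvme_tcp, nvme_fabrics and nvme_keyring actually unloaded. A condensed sketch of what nvmfcleanup does, per the trace (the sleep between retries is an assumption, the trace only shows the loop):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also pulls out nvme_fabrics/nvme_keyring dependents
        sleep 1                            # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e
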
00:21:35.405 ************************************ 00:21:35.405 END TEST nvmf_failover 00:21:35.405 ************************************ 00:21:35.405 13:13:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.405 13:13:56 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:35.405 13:13:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.405 13:13:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.405 13:13:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.405 ************************************ 00:21:35.405 START TEST nvmf_host_discovery 00:21:35.405 ************************************ 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:35.405 * Looking for test storage... 00:21:35.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.405 13:13:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:35.406 [... paths/export.sh@3 and @4 repeat the same assignment with the go/protoc/golangci directories rotated to the front, @5 exports PATH, and @6 echoes the final value ...]
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:21:35.406 13:13:57
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.406 13:13:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.304 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.562 13:13:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.562 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:37.563 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:37.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.563 13:13:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:37.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:37.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.563 13:13:59 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:21:37.563 00:21:37.563 --- 10.0.0.2 ping statistics --- 00:21:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.563 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:21:37.563 00:21:37.563 --- 10.0.0.1 ping statistics --- 00:21:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.563 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3899700 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3899700 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3899700 ']' 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.563 13:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.563 [2024-07-15 13:13:59.221358] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:21:37.563 [2024-07-15 13:13:59.221441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.563 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.821 [2024-07-15 13:13:59.288482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.821 [2024-07-15 13:13:59.405655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.821 [2024-07-15 13:13:59.405715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.821 [2024-07-15 13:13:59.405731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.821 [2024-07-15 13:13:59.405743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.821 [2024-07-15 13:13:59.405755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
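
The nvmf_tgt just launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up a few lines earlier: one port of the NIC is moved into the namespace as the target side, and its peer port stays in the root namespace as the initiator. Condensed from the trace (interface names, addresses, and port as in this log):

    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # reachability check, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
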
00:21:37.821 [2024-07-15 13:13:59.405784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 [2024-07-15 13:14:00.208117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 [2024-07-15 13:14:00.216312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 null0 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 null1 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3899852 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3899852 /tmp/host.sock 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3899852 ']' 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:38.756 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.756 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 [2024-07-15 13:14:00.290949] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:21:38.756 [2024-07-15 13:14:00.291020] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899852 ] 00:21:38.756 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.756 [2024-07-15 13:14:00.352219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.029 [2024-07-15 13:14:00.469728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.029 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.030 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 [2024-07-15 13:14:00.890130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- 
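[annotation] The @59 and @55 pipelines repeated above are the test's two query helpers: one lists NVMe controller names on the host, the other lists bdev names, both flattened with sort/xargs so they can be compared as single strings. Reconstructed approximately from the xtrace (the exact function bodies live in host/discovery.sh):

    get_subsystem_names() {
        # Controller names as one space-separated, sorted string ("" when none).
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdev names, same normalization.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

At this stage both helpers still return empty strings (the '' == '' checks above): creating the subsystem and adding null0 as a namespace on the target does not attach anything on the host until a data listener exists and the host NQN is allowed.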
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:39.329 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:39.588 13:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:40.156 [2024-07-15 13:14:01.616254] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:40.156 [2024-07-15 13:14:01.616290] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:40.156 [2024-07-15 13:14:01.616310] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:40.156 [2024-07-15 13:14:01.702600] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:40.156 [2024-07-15 13:14:01.807355] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.156 [2024-07-15 13:14:01.807378] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.416 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.674 13:14:02 
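[annotation] Once the 4420 listener is added and nvmf_subsystem_add_host authorizes nqn.2021-12.io.spdk:test, the discovery poller attaches nvme0 ("attach nvme0 done" above). The @912-@918 lines are the generic polling helper the test uses to wait for that; per the xtrace it is approximately:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # e.g. cond='[[ "$(get_subsystem_names)" == "nvme0" ]]'
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

This is why the same get_subsystem_names/get_bdev_list pipelines reappear once per second in the trace: each retry of the condition re-runs the query helpers until they report the expected state or the 10-attempt budget runs out.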
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.674 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.675 [2024-07-15 13:14:02.338309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:40.675 [2024-07-15 13:14:02.339018] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:40.675 [2024-07-15 13:14:02.339066] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:40.675 13:14:02 nvmf_tcp.nvmf_host_discovery -- 
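[annotation] The @74/@75 lines above are the notification bookkeeping: the host-side notify framework records an event per bdev attach, and the test counts only events newer than the last consumed id, then advances that id. A sketch consistent with the counts in the trace (count 1 after nvme0n1 appears, notify_id 1 -> 2 after null1 is added as nvme0n2):

    get_notification_count() {
        # Count notifications with id greater than $notify_id, then consume them.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Typical use, as in the waitforcondition calls above:
    #   expected_count=1
    #   waitforcondition 'get_notification_count && ((notification_count == expected_count))'
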
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.933 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 [2024-07-15 13:14:02.466940] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:40.934 13:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:40.934 [2024-07-15 13:14:02.570603] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.934 [2024-07-15 13:14:02.570628] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:40.934 [2024-07-15 13:14:02.570638] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
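[annotation] Adding a second listener on port 4421 raises an AER on the discovery controller; the host fetches a fresh discovery log page, sees 10.0.0.2:4421 as a new path for the same subsystem ("new path for nvme0" above), and attaches it alongside 4420. The @63 pipeline is how the test reads the active path set for a controller, and the @122 condition polls until both ports are present:

    get_subsystem_paths() {
        # Service IDs (ports) of all paths for controller $1, numerically sorted.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $1 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Expect both the original and the new listener: "4420 4421".
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'

The "4420 == 4420 4421" comparison failing on the first attempt, followed by sleep 1 and a retry, is the waitforcondition loop doing its job while the second path finishes attaching.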
# set +x 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.870 [2024-07-15 13:14:03.554847] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:41.870 [2024-07-15 13:14:03.554894] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:41.870 [2024-07-15 13:14:03.558874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.870 [2024-07-15 13:14:03.558913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.870 [2024-07-15 13:14:03.558954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.870 [2024-07-15 13:14:03.558978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.870 [2024-07-15 13:14:03.559002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.870 [2024-07-15 13:14:03.559023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.870 [2024-07-15 13:14:03.559038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.870 [2024-07-15 13:14:03.559052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.870 [2024-07-15 13:14:03.559067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to 
be set 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:41.870 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:41.870 [2024-07-15 13:14:03.568874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.128 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.128 [2024-07-15 13:14:03.578917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.128 [2024-07-15 13:14:03.579141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.128 [2024-07-15 13:14:03.579181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.128 [2024-07-15 13:14:03.579198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.579221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.579242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.579256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.579270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.579290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.129 [2024-07-15 13:14:03.588993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 [2024-07-15 13:14:03.589188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.589216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 [2024-07-15 13:14:03.589232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.589255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.589275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.589289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.589302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:42.129 [2024-07-15 13:14:03.589321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:42.129 [2024-07-15 13:14:03.599075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:42.129 [2024-07-15 13:14:03.599322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.599354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 [2024-07-15 13:14:03.599372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.599397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.599419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.599441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.599457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.599478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.129 [2024-07-15 13:14:03.609167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 [2024-07-15 13:14:03.609373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.609406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 [2024-07-15 13:14:03.609425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.609450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.609473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.609488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.609503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.609524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.129 [2024-07-15 13:14:03.619247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 [2024-07-15 13:14:03.619482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.619510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 [2024-07-15 13:14:03.619526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.619549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.619569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.619583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.619596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.619615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
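[annotation] The ERROR burst above is expected, not a test failure: nvmf_subsystem_remove_listener tears down the 4420 listener while the host still has a controller path through it, so each reconnect attempt to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) and the controller cycles through disconnect, reconnect, "controller reinitialization failed", "Resetting controller failed". The loop ends once the refreshed discovery log page no longer lists 4420 and that path is dropped while 4421 survives. The test just waits for the path set to converge:

    # Expected end state after removing the 4420 listener: only 4421 remains.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
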
00:21:42.129 [2024-07-15 13:14:03.629326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 [2024-07-15 13:14:03.629543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 [2024-07-15 13:14:03.629591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.629616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.629645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.629662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.629676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.629698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:42.129 [2024-07-15 13:14:03.639399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:42.129 [2024-07-15 13:14:03.639622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.129 [2024-07-15 13:14:03.639650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027c00 with addr=10.0.0.2, port=4420 00:21:42.129 
[2024-07-15 13:14:03.639666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027c00 is same with the state(5) to be set 00:21:42.129 [2024-07-15 13:14:03.639688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027c00 (9): Bad file descriptor 00:21:42.129 [2024-07-15 13:14:03.639709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:42.129 [2024-07-15 13:14:03.639723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:42.129 [2024-07-15 13:14:03.639737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:42.129 [2024-07-15 13:14:03.639774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.129 [2024-07-15 13:14:03.642813] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:42.129 [2024-07-15 13:14:03.642845] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:42.129 13:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.093 13:14:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.093 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:43.352 13:14:04 
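[annotation] bdev_nvme_stop_discovery detaches the discovery controller and everything it created, so the teardown checks poll both query helpers until they are empty again, and then verify that exactly two notifications (one per removed bdev) were generated. Condensed from the @134-@138 lines:

    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'   # no controllers left
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'         # nvme0n1/nvme0n2 gone
    # is_notification_count_eq 2: removals are notified just like attaches.
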
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.352 13:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.288 [2024-07-15 13:14:05.927069] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:44.288 [2024-07-15 13:14:05.927091] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:44.288 [2024-07-15 13:14:05.927110] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:44.547 [2024-07-15 13:14:06.014427] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:44.547 [2024-07-15 13:14:06.082502] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:44.547 [2024-07-15 13:14:06.082539] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:21:44.547 request:
00:21:44.547 {
00:21:44.547 "name": "nvme",
00:21:44.547 "trtype": "tcp",
00:21:44.547 "traddr": "10.0.0.2",
00:21:44.547 "adrfam": "ipv4",
00:21:44.547 "trsvcid": "8009",
00:21:44.547 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:44.547 "wait_for_attach": true,
00:21:44.547 "method": "bdev_nvme_start_discovery",
00:21:44.547 "req_id": 1
00:21:44.547 }
00:21:44.547 Got JSON-RPC error response
00:21:44.547 response:
00:21:44.547 {
00:21:44.547 "code": -17,
00:21:44.547 "message": "File exists"
00:21:44.547 }
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:44.547 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:44.548 request:
00:21:44.548 {
00:21:44.548 "name": "nvme_second",
00:21:44.548 "trtype": "tcp",
00:21:44.548 "traddr": "10.0.0.2",
00:21:44.548 "adrfam": "ipv4",
00:21:44.548 "trsvcid": "8009",
00:21:44.548 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:44.548 "wait_for_attach": true,
00:21:44.548 "method": "bdev_nvme_start_discovery",
00:21:44.548 "req_id": 1
00:21:44.548 }
00:21:44.548 Got JSON-RPC error response
00:21:44.548 response:
00:21:44.548 {
00:21:44.548 "code": -17,
00:21:44.548 "message": "File exists"
00:21:44.548 }
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:44.548 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:44.807 13:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:45.743 [2024-07-15 13:14:07.293978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.743 [2024-07-15 13:14:07.294024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1044490 with addr=10.0.0.2, port=8010
00:21:45.743 [2024-07-15 13:14:07.294049] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:45.743 [2024-07-15 13:14:07.294063] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:45.743 [2024-07-15 13:14:07.294075] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:21:46.682 [2024-07-15 13:14:08.296358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.682 [2024-07-15 13:14:08.296406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1044490 with addr=10.0.0.2, port=8010
00:21:46.682 [2024-07-15 13:14:08.296425] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:21:46.682 [2024-07-15 13:14:08.296436] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:21:46.682 [2024-07-15 13:14:08.296464] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:21:47.617 [2024-07-15 13:14:09.298634] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:21:47.617 request:
00:21:47.617 {
00:21:47.617 "name": "nvme_second",
00:21:47.617 "trtype": "tcp",
00:21:47.617 "traddr": "10.0.0.2",
00:21:47.617 "adrfam": "ipv4",
00:21:47.617 "trsvcid": "8010",
00:21:47.617 "hostnqn": "nqn.2021-12.io.spdk:test",
00:21:47.617 "wait_for_attach": false,
00:21:47.617 "attach_timeout_ms": 3000,
00:21:47.617 "method": "bdev_nvme_start_discovery",
00:21:47.617 "req_id": 1
00:21:47.617 }
00:21:47.617 Got JSON-RPC error response
00:21:47.617 response:
00:21:47.617 {
00:21:47.617 "code": -110,
00:21:47.617 "message": "Connection timed out"
00:21:47.617 }
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:21:47.617 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3899852
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:47.876 rmmod nvme_tcp
00:21:47.876 rmmod nvme_fabrics
00:21:47.876 rmmod nvme_keyring
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3899700 ']'
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3899700
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3899700 ']'
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3899700
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3899700
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
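Taken together, the responses above pin down the discovery-service semantics this test asserts: registering a second discovery service under a name that already exists is rejected with JSON-RPC error -17 ("File exists"), while an attach against the unreachable port 8010 gives up after the requested 3000 ms with -110 ("Connection timed out"). A minimal sketch of the same calls outside the harness (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, which is invoked directly here; socket path, addresses and NQN are the ones this run uses):

    # Start a discovery service on the host app and wait for the initial attach (-w).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # A second bdev_nvme_start_discovery under an existing base name returns
    # -17 ("File exists"), which the NOT wrapper in the trace asserts.

    # With -T (attach timeout in ms) instead of -w, an unreachable discovery
    # port fails with -110 ("Connection timed out") after the connect retries.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000

    # Tear the service down again by its base name.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme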
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3899700'
00:21:47.876 killing process with pid 3899700
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3899700
00:21:47.876 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3899700
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:48.136 13:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:50.674
00:21:50.674 real 0m14.810s
00:21:50.674 user 0m21.766s
00:21:50.674 sys 0m2.965s
00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:50.674 ************************************
00:21:50.674 END TEST nvmf_host_discovery
00:21:50.674 ************************************
00:21:50.674 13:14:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:50.674 13:14:11 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:21:50.674 13:14:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:50.674 13:14:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:50.674 13:14:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:50.674 ************************************
00:21:50.674 START TEST nvmf_host_multipath_status
00:21:50.674 ************************************
00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:21:50.674 * Looking for test storage...
00:21:50.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.674 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.675 13:14:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.675 13:14:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:52.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:52.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
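The ID tables built above whitelist the Intel E810/X722 and Mellanox parts the harness knows how to drive, and both 0000:0a:00.x functions on this rig match E810 (0x159b). The trace next resolves each matched PCI function to its kernel net devices through sysfs; a standalone sketch of that lookup, assuming the same device addresses this rig reports:

    #!/usr/bin/env bash
    # Mirror nvmf/common.sh: glob the net/ directory the kernel exposes
    # under each PCI function to recover its interface names.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # An unmatched glob leaves the literal pattern behind, so -e filters it out.
        [[ -e ${pci_net_devs[0]} ]] || continue
        # Strip the sysfs path prefix, leaving names such as cvl_0_0 / cvl_0_1.
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done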
00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:52.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:52.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.571 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.572 13:14:13 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:52.572 13:14:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:52.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:52.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:21:52.572
00:21:52.572 --- 10.0.0.2 ping statistics ---
00:21:52.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:52.572 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:52.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:52.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:21:52.572
00:21:52.572 --- 10.0.0.1 ping statistics ---
00:21:52.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:52.572 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3903023
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3903023
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3903023 ']'
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:52.572 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:52.829 [2024-07-15 13:14:14.106145] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
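nvmf_tcp_init above builds the whole test topology on one box: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator keeps cvl_0_1 as 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings prove the link before the nvmf_tgt instance now starting serves from inside the namespace. Reduced to bare commands, with the interface names this rig reports, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP from the namespace to the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator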
00:21:52.572 [2024-07-15 13:14:14.106249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.572 [2024-07-15 13:14:14.172829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:52.829 [2024-07-15 13:14:14.289694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.829 [2024-07-15 13:14:14.289754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.829 [2024-07-15 13:14:14.289780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.829 [2024-07-15 13:14:14.289794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.829 [2024-07-15 13:14:14.289805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.829 [2024-07-15 13:14:14.289903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.829 [2024-07-15 13:14:14.289910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3903023 00:21:52.829 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:53.086 [2024-07-15 13:14:14.649893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.086 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:53.343 Malloc0 00:21:53.343 13:14:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:53.600 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.861 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.167 [2024-07-15 13:14:15.676898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.167 13:14:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.425 [2024-07-15 13:14:15.933694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3903313 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3903313 /var/tmp/bdevperf.sock 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3903313 ']' 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.425 13:14:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:54.682 13:14:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.682 13:14:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:54.682 13:14:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:54.938 13:14:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:55.505 Nvme0n1 00:21:55.505 13:14:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:56.074 Nvme0n1 00:21:56.074 13:14:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:56.074 13:14:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:57.981 13:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:57.982 13:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:58.239 13:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:58.499 13:14:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:59.433 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:59.433 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:59.433 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.433 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:59.692 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.692 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:59.692 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.692 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:59.950 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:59.950 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:59.950 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.950 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:00.208 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.208 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:00.208 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.208 13:14:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:00.466 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.466 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:00.466 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.466 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:00.724 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.724 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:00.724 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.724 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:00.982 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.982 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:00.982 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:01.240 13:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:01.499 13:14:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:02.431 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:02.431 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:02.431 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.431 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:02.689 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:02.689 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:02.689 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.689 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:02.947 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.947 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:02.947 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.947 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:03.205 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.205 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:03.205 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.205 13:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:03.464 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.464 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:03.464 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.464 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:03.722 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.722 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:03.722 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.722 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:03.981 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.981 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:03.981 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:04.239 13:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:04.499 13:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:05.875 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.876 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:06.132 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.132 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:06.132 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.132 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.389 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.389 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.389 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.389 13:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.646 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.646 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:06.646 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.646 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.903 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.903 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:06.903 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.903 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:07.160 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.160 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:07.160 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
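set_ANA_state, invoked here as set_ANA_state non_optimized inaccessible, is the target-side half of each cycle: the @59/@60 pair sets the ANA state of the two TCP listeners on the subsystem, and the sleep 1 that follows gives the host time to observe the change (via an asynchronous event and a re-read of the ANA log page) before check_status asserts the host's view. A sketch consistent with the two traced rpc.py calls (NQN, address, and ports copied from the log; the real helper body may differ):

    # set_ANA_state <state-for-4420> <state-for-4421>: one listener update
    # per port. rpc_py is the same assumed variable as in the sketch above;
    # these calls go to the target's default RPC socket, not bdevperf's.
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The states accepted by nvmf_subsystem_listener_set_ana_state include optimized, non_optimized, and inaccessible, and the test walks through each pairing of them in turn.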
00:22:07.417 13:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:07.675 13:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:08.656 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:08.656 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:08.656 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.656 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.914 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.914 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.914 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.914 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:09.172 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.172 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:09.172 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.172 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:09.430 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.430 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:09.430 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.430 13:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.688 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.688 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:09.688 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.688 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.946 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64
-- # [[ true == \t\r\u\e ]] 00:22:09.946 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:09.946 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.946 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:10.205 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.205 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:10.205 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:10.463 13:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:10.722 13:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:11.659 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:11.659 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:11.659 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.659 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.917 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.917 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:11.917 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.917 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:12.174 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.174 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:12.174 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.174 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:12.432 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.432 13:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:12.432 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.432 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.689 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.689 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:12.689 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.689 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.946 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.946 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:12.946 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.946 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:13.204 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.204 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:13.204 13:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:13.462 13:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:13.721 13:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:14.660 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:14.660 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:14.660 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.660 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.917 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.917 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:14.917 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.917 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.174 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.174 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.175 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.175 13:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.432 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.432 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.432 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.432 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.689 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.689 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:15.689 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.689 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.946 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.946 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.946 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.946 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:16.206 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.206 13:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
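The @116 call above is the pivot of the test: it switches bdev Nvme0n1 from the default active_passive multipath policy to active_active. The effect shows in the very next cycle, where check_status true true true true true true passes: under active_active every path in the best available ANA state carries I/O, so both the 4420 and 4421 paths now report current=true, whereas every earlier cycle showed at most one current path. The same switch can be reproduced by hand against a running bdevperf (bdev name and socket paths as in this job):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Flip the multipath policy for bdev Nvme0n1 on the bdevperf instance.
    "$rpc_py" -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    # Count how many paths are currently selected for I/O; expect 2 here
    # once both listeners are optimized, versus 1 under active_passive.
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current)] | length'

When several paths are active at once, SPDK distributes I/O among them according to the configured path selector.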
00:22:16.463 13:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:16.463 13:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:16.720 13:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:16.977 13:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:17.912 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:17.912 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.912 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.912 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:18.169 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.170 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:18.170 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.170 13:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:18.427 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.427 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:18.427 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.427 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.684 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.684 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.684 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.684 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.941 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.941 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.941 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.941 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:19.198 13:14:40
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.198 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:19.198 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.198 13:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.456 13:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.456 13:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:19.456 13:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:19.713 13:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:19.970 13:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:20.902 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:20.902 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.902 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.902 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.160 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.160 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:21.160 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.160 13:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.423 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.423 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.423 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.423 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.742 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.742 13:14:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.742 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.742 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.002 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.002 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.002 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.002 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.261 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.261 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.261 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.261 13:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.518 13:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.518 13:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:22.518 13:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.775 13:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:23.034 13:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:23.969 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:23.969 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:23.969 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.969 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.227 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.227 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.227 13:14:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.227 13:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.485 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.485 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.485 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.485 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.743 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.743 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.743 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.743 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.001 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.001 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:25.001 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.001 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.259 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.259 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.259 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.259 13:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.517 13:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.517 13:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:25.517 13:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:25.774 13:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:26.033 13:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:26.969 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:26.969 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:26.969 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.969 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.225 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.225 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:27.225 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.225 13:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.506 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.506 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.506 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.506 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.762 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.762 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.762 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.762 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.018 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.018 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.018 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.018 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.275 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.275 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:28.275 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.275 13:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3903313 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3903313 ']' 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3903313 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.533 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3903313 00:22:28.791 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:28.791 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:28.791 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3903313' 00:22:28.791 killing process with pid 3903313 00:22:28.791 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3903313 00:22:28.791 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3903313 00:22:28.791 { 00:22:28.791 "core_count": 1, 00:22:28.791 "test_results": [ 00:22:28.791 { 00:22:28.791 "job": "Nvme0n1", 00:22:28.791 "test_status": "terminated", 00:22:28.791 "core_mask": "0x4", 00:22:28.791 "workload": "verify", 00:22:28.791 "verify_LBA_range_start": 0, 00:22:28.791 "verify_LBA_range_len": 16384, 00:22:28.791 "queue_depth": 128, 00:22:28.791 "io_size": 4096, 00:22:28.791 "runtime": 32.523677825927734, 00:22:28.791 "io_per_second": 7970.00904909983, 00:22:28.791 "MiB_per_second": 31.13284784804621, 00:22:28.791 "fails_per_second": 0.0, 00:22:28.791 "timeout_per_second": 0.0, 00:22:28.791 "average_latency_us": 16032.421396535223, 00:22:28.791 "min_latency_us": 250.3111111111111, 00:22:28.791 "max_latency_us": 4026531.84 00:22:28.791 } 00:22:28.791 ] 00:22:28.791 } 00:22:29.052 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3903313 00:22:29.052 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:29.052 [2024-07-15 13:14:16.002700] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
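The @137 killprocess trace above is the harness's guarded shutdown of bdevperf (pid 3903313): autotest_common.sh checks that the PID is non-empty and still alive, resolves the command name on Linux (reactor_2, the SPDK reactor thread on core 2), declines the kill only if the process turns out to be sudo, then signals and reaps it. Killing bdevperf is what flushes the JSON run summary embedded above (verify workload, ~32.5 s runtime, ~7970 IOPS / ~31.1 MiB/s, zero failures), after which @141 dumps the captured bdevperf log try.txt, whose contents follow. A sketch reduced to the branches this run actually exercised (the real helper in common/autotest_common.sh handles more cases):

    # Approximate shape of killprocess as traced above; the @NNN tags refer
    # to the common/autotest_common.sh line markers in the log.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                             # @948: require a PID
        kill -0 "$pid" || return 0                            # @952: already gone?
        local process_name
        if [ "$(uname)" = Linux ]; then                       # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_2 here
        fi
        if [ "$process_name" != sudo ]; then                  # @958: never kill sudo
            echo "killing process with pid $pid"              # @966
            kill "$pid"                                       # @967
        fi
        wait "$pid"                                           # @972: reap, let it flush
    }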
00:22:29.052 [2024-07-15 13:14:16.002794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903313 ] 00:22:29.052 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.052 [2024-07-15 13:14:16.061040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.052 [2024-07-15 13:14:16.168661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.052 Running I/O for 90 seconds... 00:22:29.052 [2024-07-15 13:14:31.969641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.052 [2024-07-15 13:14:31.969698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.969769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.969792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.969834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.969885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.969905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.969943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.969960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.969983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.969999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.052 [2024-07-15 13:14:31.970623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0
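Each pair of NOTICE lines in this dump is the host-side NVMe driver inside bdevperf printing a queued command (nvme_io_qpair_print_command) and the error completion that failed it (spdk_nvme_print_completion). The status (03/02) decodes as Status Code Type 0x3, Path Related Status, Status Code 0x2, Asymmetric Access Inaccessible: the burst of 13:14:31 entries coincides with a listener being flipped to inaccessible, and since dnr:0 leaves Do Not Retry clear, the bdev_nvme multipath layer resubmits each failed READ/WRITE on another path (or queues it until one becomes accessible again). When triaging a dump like this, a quick tally is often more useful than reading it linearly, for example (log path as cat'ed at @141 above):

    # Tally ANA path failures in the captured bdevperf log.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

    # Total number of commands failed with ASYMMETRIC ACCESS INACCESSIBLE.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"

    # The same failures broken down per submission queue id.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' "$log" |
        awk '{print $NF}' | sort | uniq -c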
00:22:29.052 [2024-07-15 13:14:31.970646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.970982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.970998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971111] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 13:14:31.971470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.052 [2024-07-15 13:14:31.971492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.052 [2024-07-15 
13:14:31.971507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... a long run of repeated nvme_qpair.c *NOTICE* pairs elided (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ and WRITE commands on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path is down; the 13:14:31 burst spans lba 69496-70224 and the 13:14:47 burst spans lba 61496-62128 ...]
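What the elided burst records: ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the ANA status a target returns while a namespace's ANA group is in the inaccessible state, which is exactly the window in which this multipath test takes the active path down, so the initiator's queued READ/WRITE commands complete with that status until I/O fails over to the other path. The same state can be inspected from the initiator side with an SPDK RPC; a minimal sketch, not part of this run's script, assuming the app's default RPC socket, the Nvme0n1 bdev used above, and that this SPDK build ships the bdev_nvme_get_io_paths RPC:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # List the I/O paths behind the multipath bdev; each reported path carries
    # its current ANA state (optimized / non_optimized / inaccessible).
    "$SPDK/scripts/rpc.py" bdev_nvme_get_io_paths -n Nvme0n1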
00:22:29.054 Received shutdown signal, test time was about 32.524475 seconds
00:22:29.054
00:22:29.054 Latency(us)
00:22:29.054 Device Information                                                    : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:29.054 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:29.054          Verification LBA range: start 0x0 length 0x4000
00:22:29.054          Nvme0n1             :      32.52    7970.01      31.13      0.00     0.00   16032.42     250.31 4026531.84
00:22:29.054 ===================================================================================================================
00:22:29.054          Total               :               7970.01      31.13      0.00     0.00   16032.42     250.31 4026531.84
00:22:29.054 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:29.313 rmmod nvme_tcp
00:22:29.313 rmmod nvme_fabrics
00:22:29.313 rmmod nvme_keyring
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3903023 ']'
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3903023
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3903023 ']'
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3903023
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3903023
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3903023'
00:22:29.313 killing process with pid 3903023
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3903023
00:22:29.313 13:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3903023
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:29.571 13:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:32.106 13:14:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:32.106
00:22:32.106 real	0m41.335s
00:22:32.106 user	2m4.803s
00:22:32.106 sys	0m10.465s
00:22:32.106 13:14:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:32.106 13:14:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:32.106 ************************************
00:22:32.106 END TEST nvmf_host_multipath_status
00:22:32.106 ************************************
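Condensed, the teardown traced above amounts to the following standalone sketch (same paths and PID as this run; the real nvmftestfini/nvmfcleanup helpers in nvmf/common.sh wrap each step in retries and error handling):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem first
    sync
    modprobe -v -r nvme-tcp       # unloads nvme_tcp plus the nvme_fabrics/nvme_keyring deps rmmod'ed above
    modprobe -v -r nvme-fabrics
    kill 3903023 && wait 3903023  # stop the nvmf_tgt reactor; wait only works because the harness launched it from this shell
    ip -4 addr flush cvl_0_1      # clear the initiator-side test address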
00:22:32.106 13:14:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:32.106 13:14:53 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:32.106 13:14:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:32.106 13:14:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:32.106 13:14:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:32.106 ************************************
00:22:32.106 START TEST nvmf_discovery_remove_ifc
00:22:32.106 ************************************
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:32.106 * Looking for test storage...
00:22:32.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
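Note how NVME_HOSTNQN and NVME_HOSTID above stay consistent: the host ID is simply the UUID portion of the NQN that nvme gen-hostnqn emits. Reproduced by hand (illustrative session; the generated UUID differs per host):

    hostnqn=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=${hostnqn##*:uuid:}     # strip the prefix, leaving the bare UUID
    echo "--hostnqn=$hostnqn --hostid=$hostid"   # the NVME_HOST pair later handed to 'nvme connect'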
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:32.106 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
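Those variables pin down the client side of the test: the well-known discovery service NQN on its conventional port 8009, the nqn.2016-06.io.spdk:cnode prefix for the subsystems the target will expose, the host NQN the client presents, and /tmp/host.sock as the RPC socket of the host-side SPDK app. For orientation, an equivalent one-shot query with nvme-cli against the target this test is about to configure would look roughly like this (illustrative only; 10.0.0.2 is the target address assigned further down):

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test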
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:22:32.107 13:14:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:33.483 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
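gather_supported_nvmf_pci_devs is populating the per-family arrays from a prebuilt pci_bus_cache map keyed by vendor:device (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox parts). A rough standalone equivalent using plain pciutils (a sketch for orientation, not the harness code; on this machine only the 0x159b lookups match anything):

    intel=8086; mellanox=15b3
    e810=(); x722=(); mlx=()
    # Bucket NICs by PCI vendor:device, roughly as the helper above does.
    e810+=($(lspci -Dmm -d "$intel:1592" | awk '{print $1}'))
    e810+=($(lspci -Dmm -d "$intel:159b" | awk '{print $1}'))   # matches 0000:0a:00.0 and 0000:0a:00.1 below
    x722+=($(lspci -Dmm -d "$intel:37d2" | awk '{print $1}'))
    for dev in a2dc 1021 a2d6 101d 1017 1019 1015 1013; do
        mlx+=($(lspci -Dmm -d "$mellanox:$dev" | awk '{print $1}'))
    done
    printf 'e810 port: %s\n' "${e810[@]}"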
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.484 13:14:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:33.484 00:22:33.484 --- 10.0.0.2 ping statistics --- 00:22:33.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.484 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:33.484 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:33.742 00:22:33.742 --- 10.0.0.1 ping statistics --- 00:22:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.742 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3909402 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3909402 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3909402 ']' 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.742 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.742 [2024-07-15 13:14:55.255995] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
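
The trace above is nvmf_tcp_init from nvmf/common.sh building the test bed: the two ice ports are split so that cvl_0_0 (the target side) moves into a private network namespace while cvl_0_1 (the initiator side) stays in the root namespace, and the two pings confirm the 10.0.0.0/24 link in both directions before anything NVMe-related starts. A condensed sketch of that sequence, using the interface and namespace names from this run:

    # Minimal reconstruction of the namespace split traced above.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> root ns

The target application is then launched inside that namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array), which is why the trace just below runs `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2`.
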
00:22:33.742 [2024-07-15 13:14:55.256073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.742 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.742 [2024-07-15 13:14:55.321882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.742 [2024-07-15 13:14:55.428305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.742 [2024-07-15 13:14:55.428361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.742 [2024-07-15 13:14:55.428383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.742 [2024-07-15 13:14:55.428394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.742 [2024-07-15 13:14:55.428403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.742 [2024-07-15 13:14:55.428428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.999 [2024-07-15 13:14:55.573951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.999 [2024-07-15 13:14:55.582113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:33.999 null0 00:22:33.999 [2024-07-15 13:14:55.614040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3909544 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3909544 /tmp/host.sock 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3909544 ']' 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:33.999 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.999 13:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.999 [2024-07-15 13:14:55.681607] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:22:34.000 [2024-07-15 13:14:55.681685] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909544 ] 00:22:34.257 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.257 [2024-07-15 13:14:55.747798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.257 [2024-07-15 13:14:55.863137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.187 13:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.181 [2024-07-15 13:14:57.804831] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.181 [2024-07-15 13:14:57.804858] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.181 [2024-07-15 13:14:57.804891] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.440 [2024-07-15 13:14:57.934359] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:36.699 [2024-07-15 13:14:58.156692] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:36.699 [2024-07-15 13:14:58.156762] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:36.699 [2024-07-15 13:14:58.156811] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:36.699 [2024-07-15 13:14:58.156837] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.699 [2024-07-15 13:14:58.156864] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.699 [2024-07-15 13:14:58.162637] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cf2870 was disconnected and freed. delete nvme_qpair. 
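
The rpc_cmd/jq/sort/xargs lines repeated through the rest of this test come from two small helpers in discovery_remove_ifc.sh that the trace keeps re-entering. A sketch reconstructed from the xtrace output (not the verbatim helpers): get_bdev_list asks the host app for its bdev names over the /tmp/host.sock RPC socket, and wait_for_bdev polls that list once per second until it matches the expected value:

    # Reconstructed from the trace above; rpc_cmd wraps scripts/rpc.py.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

So `wait_for_bdev nvme0n1` at @72 returns as soon as discovery has attached the subsystem and exposed its namespace as bdev nvme0n1, which is what the `[[ nvme0n1 != \n\v\m\e\0\n\1 ]]` comparison just below is checking.
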
00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.699 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.700 13:14:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.635 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.894 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.894 13:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.830 13:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:39.763 13:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.144 13:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
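
At @75/@76 above, the test injected the failure: it deleted the target's address and downed its link from inside the namespace, so the established NVMe/TCP connection loses its peer without any clean shutdown. The identical bdev_get_bdevs/sleep blocks before and after this point are `wait_for_bdev ''` polling once per second while the host-side reconnect timers run down:

    # Fault injection and wait, as traced above.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''    # loops until bdev_get_bdevs returns an empty list
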
00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.082 13:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.082 [2024-07-15 13:15:03.597633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:42.082 [2024-07-15 13:15:03.597702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.082 [2024-07-15 13:15:03.597724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.082 [2024-07-15 13:15:03.597743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.082 [2024-07-15 13:15:03.597758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.082 [2024-07-15 13:15:03.597774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.082 [2024-07-15 13:15:03.597789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.082 [2024-07-15 13:15:03.597805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.082 [2024-07-15 13:15:03.597820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.082 [2024-07-15 13:15:03.597835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.082 [2024-07-15 13:15:03.597851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.082 [2024-07-15 13:15:03.597885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb9300 is same with the state(5) to be set 00:22:42.082 [2024-07-15 13:15:03.607658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb9300 (9): Bad file descriptor 00:22:42.082 [2024-07-15 13:15:03.617710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.021 [2024-07-15 13:15:04.658953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:43.021 [2024-07-15 
13:15:04.659032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb9300 with addr=10.0.0.2, port=4420 00:22:43.021 [2024-07-15 13:15:04.659054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb9300 is same with the state(5) to be set 00:22:43.021 [2024-07-15 13:15:04.659107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb9300 (9): Bad file descriptor 00:22:43.021 [2024-07-15 13:15:04.659578] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:43.021 [2024-07-15 13:15:04.659607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:43.021 [2024-07-15 13:15:04.659632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:43.021 [2024-07-15 13:15:04.659647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:43.021 [2024-07-15 13:15:04.659674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:43.021 [2024-07-15 13:15:04.659689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.021 13:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.399 [2024-07-15 13:15:05.662197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:44.399 [2024-07-15 13:15:05.662245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:44.399 [2024-07-15 13:15:05.662262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:44.399 [2024-07-15 13:15:05.662277] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:44.399 [2024-07-15 13:15:05.662303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
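
The errno-110 connect failures and "Resetting controller failed" retries above are paced by the options discovery was started with at @69: reconnect is attempted every second, queued I/O is failed after one second of disconnection, and the controller is deleted outright after two seconds without a connection. For reference, the invocation from this run:

    # Options from host/discovery_remove_ifc.sh@69 (values as traced above).
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
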
00:22:44.399 [2024-07-15 13:15:05.662346] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:44.399 [2024-07-15 13:15:05.662389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.399 [2024-07-15 13:15:05.662413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.399 [2024-07-15 13:15:05.662434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.399 [2024-07-15 13:15:05.662458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.399 [2024-07-15 13:15:05.662475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.399 [2024-07-15 13:15:05.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.399 [2024-07-15 13:15:05.662506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.399 [2024-07-15 13:15:05.662522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.399 [2024-07-15 13:15:05.662539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.399 [2024-07-15 13:15:05.662555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.399 [2024-07-15 13:15:05.662570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:44.399 [2024-07-15 13:15:05.662740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8780 (9): Bad file descriptor 00:22:44.399 [2024-07-15 13:15:05.663758] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:44.399 [2024-07-15 13:15:05.663785] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.399 13:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:45.335 13:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.299 [2024-07-15 13:15:07.720810] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:46.299 [2024-07-15 13:15:07.720839] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:46.299 [2024-07-15 13:15:07.720873] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:46.299 [2024-07-15 13:15:07.847339] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:46.299 13:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.299 [2024-07-15 13:15:07.911197] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:46.299 [2024-07-15 13:15:07.911243] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:46.299 [2024-07-15 13:15:07.911277] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:46.299 [2024-07-15 13:15:07.911298] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:46.299 [2024-07-15 13:15:07.911310] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.299 [2024-07-15 13:15:07.958607] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cc0110 was disconnected and freed. delete nvme_qpair. 
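
With the old controller torn down, @82/@83 restore the target address and bring the link back up, and the still-running discovery service re-attaches the subsystem. It comes back as a new controller (nvme1, per the discovery_log_page_cb entry above), whose namespace appears as bdev nvme1n1; that is what `wait_for_bdev nvme1n1` at @86 is polling for:

    # Recovery step, as traced above.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1
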
00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.234 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3909544 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3909544 ']' 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3909544 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3909544 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3909544' 00:22:47.492 killing process with pid 3909544 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3909544 00:22:47.492 13:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3909544 00:22:47.751 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:47.751 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.752 rmmod nvme_tcp 00:22:47.752 rmmod nvme_fabrics 00:22:47.752 rmmod nvme_keyring 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
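
The teardown traced here runs in stages: killprocess takes down the host app (pid 3909544), nvmftestfini unloads the kernel modules pulled in earlier by `modprobe nvme-tcp` (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are that unload), and just below it kills the in-namespace target and removes the namespace. Condensed, with the order reconstructed from the trace:

    # Teardown order reconstructed from the trace (pids from this run).
    killprocess "$hostpid"     # 3909544, the host nvmf_tgt on /tmp/host.sock
    modprobe -v -r nvme-tcp    # emits the rmmod nvme_tcp/_fabrics/_keyring lines
    killprocess "$nvmfpid"     # 3909402, the target inside cvl_0_0_ns_spdk
    _remove_spdk_ns            # deletes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1   # final cleanup of the initiator interface
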
00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3909402 ']' 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3909402 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3909402 ']' 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3909402 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3909402 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3909402' 00:22:47.752 killing process with pid 3909402 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3909402 00:22:47.752 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3909402 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.010 13:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.920 13:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.920 00:22:49.920 real 0m18.355s 00:22:49.920 user 0m27.511s 00:22:49.920 sys 0m2.926s 00:22:49.920 13:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.920 13:15:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.920 ************************************ 00:22:49.920 END TEST nvmf_discovery_remove_ifc 00:22:49.920 ************************************ 00:22:50.178 13:15:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:50.178 13:15:11 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:50.178 13:15:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.178 13:15:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.178 13:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.178 ************************************ 00:22:50.178 START TEST nvmf_identify_kernel_target 00:22:50.178 ************************************ 
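
The banner pairs and the real/user/sys summary mark the boundary between suites: each test script is dispatched through the run_test wrapper from autotest_common.sh, which times the test body and prints the START/END markers. A rough, hypothetical sketch of that pattern, reconstructed from the output rather than the verbatim helper:

    # Hypothetical sketch of the run_test wrapper implied by the banners above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                   # run the test script; 'time' output precedes END
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

From this point the log repeats the common.sh setup for the next suite (nvmf_identify_kernel_target): the same paths/export.sh PATH exports, PCI scan of the two 0x159b ice ports, and cvl_0_0_ns_spdk namespace construction seen at the top of this section.
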
00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:50.178 * Looking for test storage... 00:22:50.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:50.178 13:15:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.178 13:15:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.115 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.115 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:22:52.116 00:22:52.116 --- 10.0.0.2 ping statistics --- 00:22:52.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.116 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:22:52.116 00:22:52.116 --- 10.0.0.1 ping statistics --- 00:22:52.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.116 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:52.116 13:15:13 
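nvmf_tcp_init (@229-@268 above) turns the two E810 ports into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and one ping in each direction proves the link before anything NVMe-related starts. Collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns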
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:52.116 13:15:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:53.054 Waiting for block devices as requested 00:22:53.312 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:53.312 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:53.570 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:53.570 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:53.570 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:53.570 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:53.830 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:53.830 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:53.830 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:53.830 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:54.089 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:54.089 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:54.089 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:54.089 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:54.348 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:54.348 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:54.348 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:54.606 No valid GPT data, bailing 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:54.606 00:22:54.606 Discovery Log Number of Records 2, Generation counter 2 00:22:54.606 =====Discovery Log Entry 0====== 00:22:54.606 trtype: tcp 00:22:54.606 adrfam: ipv4 00:22:54.606 subtype: current discovery subsystem 00:22:54.606 treq: not specified, sq flow control disable supported 00:22:54.606 portid: 1 00:22:54.606 trsvcid: 4420 00:22:54.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:54.606 traddr: 10.0.0.1 00:22:54.606 eflags: none 00:22:54.606 sectype: none 00:22:54.606 =====Discovery Log Entry 1====== 00:22:54.606 trtype: tcp 00:22:54.606 adrfam: ipv4 00:22:54.606 subtype: nvme subsystem 00:22:54.606 treq: not specified, sq flow control disable supported 00:22:54.606 portid: 1 00:22:54.606 trsvcid: 4420 00:22:54.606 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:54.606 traddr: 10.0.0.1 00:22:54.606 eflags: none 00:22:54.606 sectype: none 00:22:54.606 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:54.606 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:54.606 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.866 ===================================================== 00:22:54.866 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:54.866 ===================================================== 00:22:54.866 Controller Capabilities/Features 00:22:54.866 ================================ 00:22:54.866 Vendor ID: 0000 00:22:54.866 Subsystem Vendor ID: 0000 00:22:54.866 Serial Number: 252b26e910f2d6369a09 00:22:54.866 Model Number: Linux 00:22:54.866 Firmware Version: 6.7.0-68 00:22:54.866 Recommended Arb Burst: 0 00:22:54.866 IEEE OUI Identifier: 00 00 00 00:22:54.866 Multi-path I/O 00:22:54.866 May have multiple subsystem ports: No 00:22:54.866 May have multiple 
controllers: No 00:22:54.866 Associated with SR-IOV VF: No 00:22:54.866 Max Data Transfer Size: Unlimited 00:22:54.866 Max Number of Namespaces: 0 00:22:54.866 Max Number of I/O Queues: 1024 00:22:54.866 NVMe Specification Version (VS): 1.3 00:22:54.866 NVMe Specification Version (Identify): 1.3 00:22:54.866 Maximum Queue Entries: 1024 00:22:54.866 Contiguous Queues Required: No 00:22:54.866 Arbitration Mechanisms Supported 00:22:54.866 Weighted Round Robin: Not Supported 00:22:54.866 Vendor Specific: Not Supported 00:22:54.866 Reset Timeout: 7500 ms 00:22:54.866 Doorbell Stride: 4 bytes 00:22:54.866 NVM Subsystem Reset: Not Supported 00:22:54.866 Command Sets Supported 00:22:54.866 NVM Command Set: Supported 00:22:54.866 Boot Partition: Not Supported 00:22:54.866 Memory Page Size Minimum: 4096 bytes 00:22:54.866 Memory Page Size Maximum: 4096 bytes 00:22:54.866 Persistent Memory Region: Not Supported 00:22:54.866 Optional Asynchronous Events Supported 00:22:54.866 Namespace Attribute Notices: Not Supported 00:22:54.866 Firmware Activation Notices: Not Supported 00:22:54.867 ANA Change Notices: Not Supported 00:22:54.867 PLE Aggregate Log Change Notices: Not Supported 00:22:54.867 LBA Status Info Alert Notices: Not Supported 00:22:54.867 EGE Aggregate Log Change Notices: Not Supported 00:22:54.867 Normal NVM Subsystem Shutdown event: Not Supported 00:22:54.867 Zone Descriptor Change Notices: Not Supported 00:22:54.867 Discovery Log Change Notices: Supported 00:22:54.867 Controller Attributes 00:22:54.867 128-bit Host Identifier: Not Supported 00:22:54.867 Non-Operational Permissive Mode: Not Supported 00:22:54.867 NVM Sets: Not Supported 00:22:54.867 Read Recovery Levels: Not Supported 00:22:54.867 Endurance Groups: Not Supported 00:22:54.867 Predictable Latency Mode: Not Supported 00:22:54.867 Traffic Based Keep ALive: Not Supported 00:22:54.867 Namespace Granularity: Not Supported 00:22:54.867 SQ Associations: Not Supported 00:22:54.867 UUID List: Not Supported 00:22:54.867 Multi-Domain Subsystem: Not Supported 00:22:54.867 Fixed Capacity Management: Not Supported 00:22:54.867 Variable Capacity Management: Not Supported 00:22:54.867 Delete Endurance Group: Not Supported 00:22:54.867 Delete NVM Set: Not Supported 00:22:54.867 Extended LBA Formats Supported: Not Supported 00:22:54.867 Flexible Data Placement Supported: Not Supported 00:22:54.867 00:22:54.867 Controller Memory Buffer Support 00:22:54.867 ================================ 00:22:54.867 Supported: No 00:22:54.867 00:22:54.867 Persistent Memory Region Support 00:22:54.867 ================================ 00:22:54.867 Supported: No 00:22:54.867 00:22:54.867 Admin Command Set Attributes 00:22:54.867 ============================ 00:22:54.867 Security Send/Receive: Not Supported 00:22:54.867 Format NVM: Not Supported 00:22:54.867 Firmware Activate/Download: Not Supported 00:22:54.867 Namespace Management: Not Supported 00:22:54.867 Device Self-Test: Not Supported 00:22:54.867 Directives: Not Supported 00:22:54.867 NVMe-MI: Not Supported 00:22:54.867 Virtualization Management: Not Supported 00:22:54.867 Doorbell Buffer Config: Not Supported 00:22:54.867 Get LBA Status Capability: Not Supported 00:22:54.867 Command & Feature Lockdown Capability: Not Supported 00:22:54.867 Abort Command Limit: 1 00:22:54.867 Async Event Request Limit: 1 00:22:54.867 Number of Firmware Slots: N/A 00:22:54.867 Firmware Slot 1 Read-Only: N/A 00:22:54.867 Firmware Activation Without Reset: N/A 00:22:54.867 Multiple Update Detection Support: N/A 
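The discovery output and the controller dump above are served by the kernel target that configure_kernel_target assembled a few lines earlier (@658-@677) entirely through configfs: a subsystem with one namespace backed by /dev/nvme0n1 and a TCP port on 10.0.0.1:4420, linked so the port exports the subsystem. xtrace does not show redirection targets, so the attribute file names below are filled in from the standard nvmet configfs layout rather than taken from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"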
00:22:54.867 Firmware Update Granularity: No Information Provided 00:22:54.867 Per-Namespace SMART Log: No 00:22:54.867 Asymmetric Namespace Access Log Page: Not Supported 00:22:54.867 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:54.867 Command Effects Log Page: Not Supported 00:22:54.867 Get Log Page Extended Data: Supported 00:22:54.867 Telemetry Log Pages: Not Supported 00:22:54.867 Persistent Event Log Pages: Not Supported 00:22:54.867 Supported Log Pages Log Page: May Support 00:22:54.867 Commands Supported & Effects Log Page: Not Supported 00:22:54.867 Feature Identifiers & Effects Log Page:May Support 00:22:54.867 NVMe-MI Commands & Effects Log Page: May Support 00:22:54.867 Data Area 4 for Telemetry Log: Not Supported 00:22:54.867 Error Log Page Entries Supported: 1 00:22:54.867 Keep Alive: Not Supported 00:22:54.867 00:22:54.867 NVM Command Set Attributes 00:22:54.867 ========================== 00:22:54.867 Submission Queue Entry Size 00:22:54.867 Max: 1 00:22:54.867 Min: 1 00:22:54.867 Completion Queue Entry Size 00:22:54.867 Max: 1 00:22:54.867 Min: 1 00:22:54.867 Number of Namespaces: 0 00:22:54.867 Compare Command: Not Supported 00:22:54.867 Write Uncorrectable Command: Not Supported 00:22:54.867 Dataset Management Command: Not Supported 00:22:54.867 Write Zeroes Command: Not Supported 00:22:54.867 Set Features Save Field: Not Supported 00:22:54.867 Reservations: Not Supported 00:22:54.867 Timestamp: Not Supported 00:22:54.867 Copy: Not Supported 00:22:54.867 Volatile Write Cache: Not Present 00:22:54.867 Atomic Write Unit (Normal): 1 00:22:54.867 Atomic Write Unit (PFail): 1 00:22:54.867 Atomic Compare & Write Unit: 1 00:22:54.867 Fused Compare & Write: Not Supported 00:22:54.867 Scatter-Gather List 00:22:54.867 SGL Command Set: Supported 00:22:54.867 SGL Keyed: Not Supported 00:22:54.867 SGL Bit Bucket Descriptor: Not Supported 00:22:54.867 SGL Metadata Pointer: Not Supported 00:22:54.867 Oversized SGL: Not Supported 00:22:54.867 SGL Metadata Address: Not Supported 00:22:54.867 SGL Offset: Supported 00:22:54.867 Transport SGL Data Block: Not Supported 00:22:54.867 Replay Protected Memory Block: Not Supported 00:22:54.867 00:22:54.867 Firmware Slot Information 00:22:54.867 ========================= 00:22:54.867 Active slot: 0 00:22:54.867 00:22:54.867 00:22:54.867 Error Log 00:22:54.867 ========= 00:22:54.867 00:22:54.867 Active Namespaces 00:22:54.867 ================= 00:22:54.867 Discovery Log Page 00:22:54.867 ================== 00:22:54.867 Generation Counter: 2 00:22:54.867 Number of Records: 2 00:22:54.867 Record Format: 0 00:22:54.867 00:22:54.867 Discovery Log Entry 0 00:22:54.867 ---------------------- 00:22:54.867 Transport Type: 3 (TCP) 00:22:54.867 Address Family: 1 (IPv4) 00:22:54.867 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:54.867 Entry Flags: 00:22:54.867 Duplicate Returned Information: 0 00:22:54.867 Explicit Persistent Connection Support for Discovery: 0 00:22:54.867 Transport Requirements: 00:22:54.867 Secure Channel: Not Specified 00:22:54.867 Port ID: 1 (0x0001) 00:22:54.867 Controller ID: 65535 (0xffff) 00:22:54.867 Admin Max SQ Size: 32 00:22:54.867 Transport Service Identifier: 4420 00:22:54.867 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:54.867 Transport Address: 10.0.0.1 00:22:54.867 Discovery Log Entry 1 00:22:54.867 ---------------------- 00:22:54.867 Transport Type: 3 (TCP) 00:22:54.867 Address Family: 1 (IPv4) 00:22:54.867 Subsystem Type: 2 (NVM Subsystem) 00:22:54.867 Entry Flags: 
00:22:54.867 Duplicate Returned Information: 0 00:22:54.867 Explicit Persistent Connection Support for Discovery: 0 00:22:54.867 Transport Requirements: 00:22:54.867 Secure Channel: Not Specified 00:22:54.867 Port ID: 1 (0x0001) 00:22:54.867 Controller ID: 65535 (0xffff) 00:22:54.867 Admin Max SQ Size: 32 00:22:54.867 Transport Service Identifier: 4420 00:22:54.867 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:54.867 Transport Address: 10.0.0.1 00:22:54.867 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:54.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.867 get_feature(0x01) failed 00:22:54.867 get_feature(0x02) failed 00:22:54.867 get_feature(0x04) failed 00:22:54.867 ===================================================== 00:22:54.867 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:54.867 ===================================================== 00:22:54.867 Controller Capabilities/Features 00:22:54.867 ================================ 00:22:54.867 Vendor ID: 0000 00:22:54.867 Subsystem Vendor ID: 0000 00:22:54.867 Serial Number: da9cd272c40a4b8417bb 00:22:54.867 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:54.867 Firmware Version: 6.7.0-68 00:22:54.867 Recommended Arb Burst: 6 00:22:54.867 IEEE OUI Identifier: 00 00 00 00:22:54.867 Multi-path I/O 00:22:54.867 May have multiple subsystem ports: Yes 00:22:54.867 May have multiple controllers: Yes 00:22:54.867 Associated with SR-IOV VF: No 00:22:54.867 Max Data Transfer Size: Unlimited 00:22:54.867 Max Number of Namespaces: 1024 00:22:54.867 Max Number of I/O Queues: 128 00:22:54.867 NVMe Specification Version (VS): 1.3 00:22:54.867 NVMe Specification Version (Identify): 1.3 00:22:54.867 Maximum Queue Entries: 1024 00:22:54.867 Contiguous Queues Required: No 00:22:54.867 Arbitration Mechanisms Supported 00:22:54.867 Weighted Round Robin: Not Supported 00:22:54.867 Vendor Specific: Not Supported 00:22:54.867 Reset Timeout: 7500 ms 00:22:54.867 Doorbell Stride: 4 bytes 00:22:54.867 NVM Subsystem Reset: Not Supported 00:22:54.867 Command Sets Supported 00:22:54.867 NVM Command Set: Supported 00:22:54.867 Boot Partition: Not Supported 00:22:54.867 Memory Page Size Minimum: 4096 bytes 00:22:54.867 Memory Page Size Maximum: 4096 bytes 00:22:54.867 Persistent Memory Region: Not Supported 00:22:54.867 Optional Asynchronous Events Supported 00:22:54.867 Namespace Attribute Notices: Supported 00:22:54.867 Firmware Activation Notices: Not Supported 00:22:54.867 ANA Change Notices: Supported 00:22:54.867 PLE Aggregate Log Change Notices: Not Supported 00:22:54.867 LBA Status Info Alert Notices: Not Supported 00:22:54.867 EGE Aggregate Log Change Notices: Not Supported 00:22:54.867 Normal NVM Subsystem Shutdown event: Not Supported 00:22:54.867 Zone Descriptor Change Notices: Not Supported 00:22:54.867 Discovery Log Change Notices: Not Supported 00:22:54.867 Controller Attributes 00:22:54.867 128-bit Host Identifier: Supported 00:22:54.867 Non-Operational Permissive Mode: Not Supported 00:22:54.868 NVM Sets: Not Supported 00:22:54.868 Read Recovery Levels: Not Supported 00:22:54.868 Endurance Groups: Not Supported 00:22:54.868 Predictable Latency Mode: Not Supported 00:22:54.868 Traffic Based Keep ALive: Supported 00:22:54.868 Namespace Granularity: Not Supported 
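Both identify dumps in this test come from the same spdk_nvme_identify binary; -r takes a transport ID string and only the subnqn differs, selecting either the discovery controller (the first dump) or the NVM subsystem itself (the dump continuing below). The three get_feature failures logged at the start of the second run look expected rather than fatal: features 0x01 (Arbitration), 0x02 (Power Management) and 0x04 (Temperature Threshold) are optional, and the kernel target evidently does not implement them. For reference, the two invocations:

    id=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    $id -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
    $id -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'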
00:22:54.868 SQ Associations: Not Supported 00:22:54.868 UUID List: Not Supported 00:22:54.868 Multi-Domain Subsystem: Not Supported 00:22:54.868 Fixed Capacity Management: Not Supported 00:22:54.868 Variable Capacity Management: Not Supported 00:22:54.868 Delete Endurance Group: Not Supported 00:22:54.868 Delete NVM Set: Not Supported 00:22:54.868 Extended LBA Formats Supported: Not Supported 00:22:54.868 Flexible Data Placement Supported: Not Supported 00:22:54.868 00:22:54.868 Controller Memory Buffer Support 00:22:54.868 ================================ 00:22:54.868 Supported: No 00:22:54.868 00:22:54.868 Persistent Memory Region Support 00:22:54.868 ================================ 00:22:54.868 Supported: No 00:22:54.868 00:22:54.868 Admin Command Set Attributes 00:22:54.868 ============================ 00:22:54.868 Security Send/Receive: Not Supported 00:22:54.868 Format NVM: Not Supported 00:22:54.868 Firmware Activate/Download: Not Supported 00:22:54.868 Namespace Management: Not Supported 00:22:54.868 Device Self-Test: Not Supported 00:22:54.868 Directives: Not Supported 00:22:54.868 NVMe-MI: Not Supported 00:22:54.868 Virtualization Management: Not Supported 00:22:54.868 Doorbell Buffer Config: Not Supported 00:22:54.868 Get LBA Status Capability: Not Supported 00:22:54.868 Command & Feature Lockdown Capability: Not Supported 00:22:54.868 Abort Command Limit: 4 00:22:54.868 Async Event Request Limit: 4 00:22:54.868 Number of Firmware Slots: N/A 00:22:54.868 Firmware Slot 1 Read-Only: N/A 00:22:54.868 Firmware Activation Without Reset: N/A 00:22:54.868 Multiple Update Detection Support: N/A 00:22:54.868 Firmware Update Granularity: No Information Provided 00:22:54.868 Per-Namespace SMART Log: Yes 00:22:54.868 Asymmetric Namespace Access Log Page: Supported 00:22:54.868 ANA Transition Time : 10 sec 00:22:54.868 00:22:54.868 Asymmetric Namespace Access Capabilities 00:22:54.868 ANA Optimized State : Supported 00:22:54.868 ANA Non-Optimized State : Supported 00:22:54.868 ANA Inaccessible State : Supported 00:22:54.868 ANA Persistent Loss State : Supported 00:22:54.868 ANA Change State : Supported 00:22:54.868 ANAGRPID is not changed : No 00:22:54.868 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:54.868 00:22:54.868 ANA Group Identifier Maximum : 128 00:22:54.868 Number of ANA Group Identifiers : 128 00:22:54.868 Max Number of Allowed Namespaces : 1024 00:22:54.868 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:54.868 Command Effects Log Page: Supported 00:22:54.868 Get Log Page Extended Data: Supported 00:22:54.868 Telemetry Log Pages: Not Supported 00:22:54.868 Persistent Event Log Pages: Not Supported 00:22:54.868 Supported Log Pages Log Page: May Support 00:22:54.868 Commands Supported & Effects Log Page: Not Supported 00:22:54.868 Feature Identifiers & Effects Log Page:May Support 00:22:54.868 NVMe-MI Commands & Effects Log Page: May Support 00:22:54.868 Data Area 4 for Telemetry Log: Not Supported 00:22:54.868 Error Log Page Entries Supported: 128 00:22:54.868 Keep Alive: Supported 00:22:54.868 Keep Alive Granularity: 1000 ms 00:22:54.868 00:22:54.868 NVM Command Set Attributes 00:22:54.868 ========================== 00:22:54.868 Submission Queue Entry Size 00:22:54.868 Max: 64 00:22:54.868 Min: 64 00:22:54.868 Completion Queue Entry Size 00:22:54.868 Max: 16 00:22:54.868 Min: 16 00:22:54.868 Number of Namespaces: 1024 00:22:54.868 Compare Command: Not Supported 00:22:54.868 Write Uncorrectable Command: Not Supported 00:22:54.868 Dataset Management Command: Supported 
00:22:54.868 Write Zeroes Command: Supported 00:22:54.868 Set Features Save Field: Not Supported 00:22:54.868 Reservations: Not Supported 00:22:54.868 Timestamp: Not Supported 00:22:54.868 Copy: Not Supported 00:22:54.868 Volatile Write Cache: Present 00:22:54.868 Atomic Write Unit (Normal): 1 00:22:54.868 Atomic Write Unit (PFail): 1 00:22:54.868 Atomic Compare & Write Unit: 1 00:22:54.868 Fused Compare & Write: Not Supported 00:22:54.868 Scatter-Gather List 00:22:54.868 SGL Command Set: Supported 00:22:54.868 SGL Keyed: Not Supported 00:22:54.868 SGL Bit Bucket Descriptor: Not Supported 00:22:54.868 SGL Metadata Pointer: Not Supported 00:22:54.868 Oversized SGL: Not Supported 00:22:54.868 SGL Metadata Address: Not Supported 00:22:54.868 SGL Offset: Supported 00:22:54.868 Transport SGL Data Block: Not Supported 00:22:54.868 Replay Protected Memory Block: Not Supported 00:22:54.868 00:22:54.868 Firmware Slot Information 00:22:54.868 ========================= 00:22:54.868 Active slot: 0 00:22:54.868 00:22:54.868 Asymmetric Namespace Access 00:22:54.868 =========================== 00:22:54.868 Change Count : 0 00:22:54.868 Number of ANA Group Descriptors : 1 00:22:54.868 ANA Group Descriptor : 0 00:22:54.868 ANA Group ID : 1 00:22:54.868 Number of NSID Values : 1 00:22:54.868 Change Count : 0 00:22:54.868 ANA State : 1 00:22:54.868 Namespace Identifier : 1 00:22:54.868 00:22:54.868 Commands Supported and Effects 00:22:54.868 ============================== 00:22:54.868 Admin Commands 00:22:54.868 -------------- 00:22:54.868 Get Log Page (02h): Supported 00:22:54.868 Identify (06h): Supported 00:22:54.868 Abort (08h): Supported 00:22:54.868 Set Features (09h): Supported 00:22:54.868 Get Features (0Ah): Supported 00:22:54.868 Asynchronous Event Request (0Ch): Supported 00:22:54.868 Keep Alive (18h): Supported 00:22:54.868 I/O Commands 00:22:54.868 ------------ 00:22:54.868 Flush (00h): Supported 00:22:54.868 Write (01h): Supported LBA-Change 00:22:54.868 Read (02h): Supported 00:22:54.868 Write Zeroes (08h): Supported LBA-Change 00:22:54.868 Dataset Management (09h): Supported 00:22:54.868 00:22:54.868 Error Log 00:22:54.868 ========= 00:22:54.868 Entry: 0 00:22:54.868 Error Count: 0x3 00:22:54.868 Submission Queue Id: 0x0 00:22:54.868 Command Id: 0x5 00:22:54.868 Phase Bit: 0 00:22:54.868 Status Code: 0x2 00:22:54.868 Status Code Type: 0x0 00:22:54.868 Do Not Retry: 1 00:22:54.868 Error Location: 0x28 00:22:54.868 LBA: 0x0 00:22:54.868 Namespace: 0x0 00:22:54.868 Vendor Log Page: 0x0 00:22:54.868 ----------- 00:22:54.868 Entry: 1 00:22:54.868 Error Count: 0x2 00:22:54.868 Submission Queue Id: 0x0 00:22:54.868 Command Id: 0x5 00:22:54.868 Phase Bit: 0 00:22:54.868 Status Code: 0x2 00:22:54.868 Status Code Type: 0x0 00:22:54.868 Do Not Retry: 1 00:22:54.868 Error Location: 0x28 00:22:54.868 LBA: 0x0 00:22:54.868 Namespace: 0x0 00:22:54.868 Vendor Log Page: 0x0 00:22:54.868 ----------- 00:22:54.868 Entry: 2 00:22:54.868 Error Count: 0x1 00:22:54.868 Submission Queue Id: 0x0 00:22:54.868 Command Id: 0x4 00:22:54.868 Phase Bit: 0 00:22:54.868 Status Code: 0x2 00:22:54.868 Status Code Type: 0x0 00:22:54.868 Do Not Retry: 1 00:22:54.868 Error Location: 0x28 00:22:54.868 LBA: 0x0 00:22:54.868 Namespace: 0x0 00:22:54.868 Vendor Log Page: 0x0 00:22:54.868 00:22:54.868 Number of Queues 00:22:54.868 ================ 00:22:54.868 Number of I/O Submission Queues: 128 00:22:54.868 Number of I/O Completion Queues: 128 00:22:54.868 00:22:54.868 ZNS Specific Controller Data 00:22:54.868 
============================ 00:22:54.868 Zone Append Size Limit: 0 00:22:54.868 00:22:54.868 00:22:54.868 Active Namespaces 00:22:54.868 ================= 00:22:54.868 get_feature(0x05) failed 00:22:54.868 Namespace ID:1 00:22:54.868 Command Set Identifier: NVM (00h) 00:22:54.868 Deallocate: Supported 00:22:54.868 Deallocated/Unwritten Error: Not Supported 00:22:54.868 Deallocated Read Value: Unknown 00:22:54.868 Deallocate in Write Zeroes: Not Supported 00:22:54.868 Deallocated Guard Field: 0xFFFF 00:22:54.868 Flush: Supported 00:22:54.868 Reservation: Not Supported 00:22:54.868 Namespace Sharing Capabilities: Multiple Controllers 00:22:54.868 Size (in LBAs): 1953525168 (931GiB) 00:22:54.868 Capacity (in LBAs): 1953525168 (931GiB) 00:22:54.868 Utilization (in LBAs): 1953525168 (931GiB) 00:22:54.868 UUID: bc814a9d-0993-4d8e-b3cc-f8931d13adfa 00:22:54.868 Thin Provisioning: Not Supported 00:22:54.868 Per-NS Atomic Units: Yes 00:22:54.868 Atomic Boundary Size (Normal): 0 00:22:54.868 Atomic Boundary Size (PFail): 0 00:22:54.868 Atomic Boundary Offset: 0 00:22:54.868 NGUID/EUI64 Never Reused: No 00:22:54.868 ANA group ID: 1 00:22:54.868 Namespace Write Protected: No 00:22:54.868 Number of LBA Formats: 1 00:22:54.868 Current LBA Format: LBA Format #00 00:22:54.868 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:54.868 00:22:54.868 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:54.868 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.868 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.869 rmmod nvme_tcp 00:22:54.869 rmmod nvme_fabrics 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.869 13:15:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:57.406 
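nvmftestfini unwinds the initiator side: sync, unload nvme-tcp (which, per the rmmod lines above, drops nvme_tcp and then nvme_fabrics), remove the helper namespace, and flush the address left on the initiator port. Condensed from the trace; _remove_spdk_ns runs with xtrace disabled, so the netns deletion shown here is an assumption about its body:

    sync
    modprobe -v -r nvme-tcp          # rmmod nvme_tcp / rmmod nvme_fabrics above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk  # assumed content of _remove_spdk_ns (not in the log)
    ip -4 addr flush cvl_0_1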
13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:57.406 13:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:58.340 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:58.340 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:58.340 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:59.273 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:59.530 00:22:59.530 real 0m9.382s 00:22:59.530 user 0m1.961s 00:22:59.530 sys 0m3.356s 00:22:59.530 13:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.530 13:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.530 ************************************ 00:22:59.530 END TEST nvmf_identify_kernel_target 00:22:59.530 ************************************ 00:22:59.530 13:15:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.530 13:15:21 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:59.530 13:15:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:59.530 13:15:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.530 13:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.530 ************************************ 00:22:59.530 START TEST nvmf_auth_host 00:22:59.530 ************************************ 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
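clean_kernel_target (@684-@698 above) has to take the configfs tree apart in strict reverse order: disable the namespace, break the port-to-subsystem link, rmdir the leaves before their parents, then unload nvmet_tcp/nvmet and let setup.sh rebind the devices to vfio-pci. As run above, with one redirection target (hidden by xtrace) assumed:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the 'echo 0' at @686
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet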
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:59.530 * Looking for test storage... 00:22:59.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.530 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.531 13:15:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.059 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.060 
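auth.sh declares its DH-HMAC-CHAP test matrix up front at @13-@21 above: three digests, five ffdhe groups, the subsystem/host NQNs, and empty keys/ckeys arrays to be filled per case. A sketch of how such a 3x5 matrix is typically driven (run_auth_test is a hypothetical stand-in; the real per-case driver in auth.sh is not visible in this part of the trace):

    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

    run_auth_test() {                 # hypothetical stand-in for the real driver
        echo "auth case: digest=$1 dhgroup=$2"
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            run_auth_test "$digest" "$dhgroup"
        done
    done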
13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:02.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:02.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:02.060 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:02.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:23:02.060 00:23:02.060 --- 10.0.0.2 ping statistics --- 00:23:02.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.060 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:23:02.060 00:23:02.060 --- 10.0.0.1 ping statistics --- 00:23:02.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.060 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3917377 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3917377 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3917377 ']' 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.060 13:15:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c8c9d84dcc44c02092f2ee8d4ca26f9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Oq9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c8c9d84dcc44c02092f2ee8d4ca26f9 0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c8c9d84dcc44c02092f2ee8d4ca26f9 0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c8c9d84dcc44c02092f2ee8d4ca26f9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Oq9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Oq9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Oq9 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:02.991 
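Each gen_dhchap_key call traced above draws len/2 random bytes with xxd, picks a mktemp file, and pipes the hex through an inline python step to produce the DHHC-1 secret. The python body is not visible in the xtrace; the sketch below assumes the standard DH-HMAC-CHAP secret representation (base64 of the secret concatenated with its little-endian CRC-32, with the digest id in the prefix):

#!/usr/bin/env bash
# Hypothetical reconstruction of gen_dhchap_key for a 32-hex-char 'null' key.
digest=0 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as hex
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
secret = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed encoding
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
EOF
chmod 0600 "$file"
echo "$file"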
13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34049033d3cea49b531737645a68c9df0d773dc3187505237fe877087b707fc6 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iIE 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34049033d3cea49b531737645a68c9df0d773dc3187505237fe877087b707fc6 3 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34049033d3cea49b531737645a68c9df0d773dc3187505237fe877087b707fc6 3 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34049033d3cea49b531737645a68c9df0d773dc3187505237fe877087b707fc6 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iIE 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iIE 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.iIE 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=19a911c6ee6da1615bac347d0ffbce508985d9724f2e779c 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nAi 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 19a911c6ee6da1615bac347d0ffbce508985d9724f2e779c 0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 19a911c6ee6da1615bac347d0ffbce508985d9724f2e779c 0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=19a911c6ee6da1615bac347d0ffbce508985d9724f2e779c 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nAi 00:23:02.991 13:15:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nAi 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nAi 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e5845ce53a6682354430f26179bdc4417a9cc4f1106b11cb 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jR7 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e5845ce53a6682354430f26179bdc4417a9cc4f1106b11cb 2 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e5845ce53a6682354430f26179bdc4417a9cc4f1106b11cb 2 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e5845ce53a6682354430f26179bdc4417a9cc4f1106b11cb 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jR7 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jR7 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jR7 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=addcc9adee04b1973515f8ba3bc2d71d 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Nto 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key addcc9adee04b1973515f8ba3bc2d71d 1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 addcc9adee04b1973515f8ba3bc2d71d 1 
00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=addcc9adee04b1973515f8ba3bc2d71d 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:02.991 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Nto 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Nto 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Nto 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7d3ba84a02ae5716ba142d35a324dc18 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:03.301 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6v1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7d3ba84a02ae5716ba142d35a324dc18 1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7d3ba84a02ae5716ba142d35a324dc18 1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7d3ba84a02ae5716ba142d35a324dc18 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6v1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6v1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6v1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=6d8301c4efb8b33a81ced8b442e840fbac60eee87072d7d7 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2KJ 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d8301c4efb8b33a81ced8b442e840fbac60eee87072d7d7 2 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d8301c4efb8b33a81ced8b442e840fbac60eee87072d7d7 2 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d8301c4efb8b33a81ced8b442e840fbac60eee87072d7d7 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2KJ 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2KJ 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2KJ 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e47f8cea56ea7ee30d2a1b1d806411b 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fP0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e47f8cea56ea7ee30d2a1b1d806411b 0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e47f8cea56ea7ee30d2a1b1d806411b 0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e47f8cea56ea7ee30d2a1b1d806411b 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fP0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fP0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fP0 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0d2252c8d595a4bab3f505bee2c71e9fb305658992e11e38764b31113d242e4 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tRu 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0d2252c8d595a4bab3f505bee2c71e9fb305658992e11e38764b31113d242e4 3 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0d2252c8d595a4bab3f505bee2c71e9fb305658992e11e38764b31113d242e4 3 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0d2252c8d595a4bab3f505bee2c71e9fb305658992e11e38764b31113d242e4 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tRu 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tRu 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tRu 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3917377 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3917377 ']' 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
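waitforlisten, traced above for the nvmf_tgt pid, blocks until the freshly started app answers on /var/tmp/spdk.sock. The polling loop below is an assumed sketch, not the common.sh implementation; rpc_get_methods is a standard SPDK RPC that any live app answers:

# Hypothetical sketch of waiting for an SPDK app's RPC socket to come up.
pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2> /dev/null || { echo "process $pid died" >&2; exit 1; }
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        exit 0   # app is up and serving RPCs
    fi
    sleep 0.1
done
echo "timed out waiting for $rpc_addr" >&2
exit 1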
00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.302 13:15:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Oq9 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.iIE ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iIE 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nAi 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jR7 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jR7 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Nto 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6v1 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6v1 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2KJ 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fP0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fP0 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.559 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tRu 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
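The keyring_file_add_key calls traced above register every generated key file under the names key0..key4, plus the matching controller keys as ckey0..ckey3 when one was generated (ckeys[4] is empty, so it is skipped). A sketch of that loop, assuming scripts/rpc.py in place of the rpc_cmd wrapper and the keys/ckeys arrays from the key-generation step:

# Hypothetical sketch of the key-registration loop.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]:-} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done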
00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:03.560 13:15:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:04.933 Waiting for block devices as requested 00:23:04.933 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:04.933 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:04.933 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:05.191 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:05.191 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:05.191 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:05.450 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:05.450 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:05.450 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:05.450 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:05.706 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:05.706 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:05.706 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:05.962 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:05.962 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:05.962 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:05.962 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:06.528 No valid GPT data, bailing 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:06.528 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:06.786 00:23:06.786 Discovery Log Number of Records 2, Generation counter 2 00:23:06.786 =====Discovery Log Entry 0====== 00:23:06.786 trtype: tcp 00:23:06.786 adrfam: ipv4 00:23:06.786 subtype: current discovery subsystem 00:23:06.786 treq: not specified, sq flow control disable supported 00:23:06.786 portid: 1 00:23:06.786 trsvcid: 4420 00:23:06.786 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:06.786 traddr: 10.0.0.1 00:23:06.786 eflags: none 00:23:06.786 sectype: none 00:23:06.786 =====Discovery Log Entry 1====== 00:23:06.786 trtype: tcp 00:23:06.786 adrfam: ipv4 00:23:06.786 subtype: nvme subsystem 00:23:06.786 treq: not specified, sq flow control disable supported 00:23:06.786 portid: 1 00:23:06.786 trsvcid: 4420 00:23:06.786 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:06.786 traddr: 10.0.0.1 00:23:06.786 eflags: none 00:23:06.786 sectype: none 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 
]] 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.786 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.787 nvme0n1 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.787 
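connect_authenticate, traced above for keyid 1 with every digest and dhgroup enabled at once, first configures the initiator's DH-HMAC-CHAP policy and then attaches using the keyring names. The same sequence as standalone rpc.py calls, with the addresses and NQNs copied from the trace and rpc.py substituted for the rpc_cmd wrapper:

# Hypothetical standalone replay of one authenticated attach.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Verify the controller came up, then detach for the next combination.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0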
13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.787 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 
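nvmet_auth_set_key, traced above, echoes the hash name, the DH group, and the DHHC-1 secrets into the kernel target's host entry. The redirection targets are not captured by the xtrace, so the configfs attribute names below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption about the nvmet host directory layout; the secret values are elided here:

# Hypothetical sketch of the kernel-target side of nvmet_auth_set_key.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"       # digest for this pass
echo ffdhe2048        > "$host/dhchap_dhgroup"    # DH group for this pass
echo "DHHC-1:00:...:" > "$host/dhchap_key"        # keys[keyid], value elided
echo "DHHC-1:03:...:" > "$host/dhchap_ctrl_key"   # ckeys[keyid], when set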
13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 nvme0n1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.045 13:15:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.045 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.046 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.046 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.046 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.305 nvme0n1 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
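The surrounding trace is one pass of the full test matrix: the loop heads at host/auth.sh@100-104 iterate every digest, every DH group, and every keyid, re-keying the kernel target and re-attaching each time. Condensed from the loops visible in the trace:

# Structure of the test matrix being traced (condensed from host/auth.sh).
for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # kernel side
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # SPDK side
        done
    done
done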
00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.305 13:15:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.611 nvme0n1 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:07.611 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:07.611 13:15:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.612 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.880 nvme0n1 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.880 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.881 nvme0n1 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.881 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.137 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.138 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.395 nvme0n1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.395 13:15:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 nvme0n1 00:23:08.653 
13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.653 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.911 nvme0n1 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.911 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.169 nvme0n1 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.169 
13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.169 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.170 13:15:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.170 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.427 nvme0n1 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.427 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:09.428 13:15:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.428 13:15:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.685 nvme0n1 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.685 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.686 13:15:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.686 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.944 nvme0n1 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.944 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.202 13:15:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.202 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.461 nvme0n1 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.462 13:15:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.462 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 nvme0n1 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 13:15:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.721 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.980 nvme0n1 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.980 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:11.238 13:15:32 
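
Worth noting for readers of the @48-@51 lines: bash xtrace does not print redirections, so those bare echo commands are actually writes into the kernel nvmet target's configfs host entry, which is what arms the target side before each connect. A hedged reconstruction (the attribute names match the Linux nvmet host entry; the $nvmet_host path variable is an assumption):

  nvmet_auth_set_key() {  # host/auth.sh@42-@51, reconstructed from the trace
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"     # @48
      echo "$dhgroup"        > "${nvmet_host}/dhchap_dhgroup"  # @49
      echo "$key"            > "${nvmet_host}/dhchap_key"      # @50
      [[ -z $ckey ]] || echo "$ckey" > "${nvmet_host}/dhchap_ctrl_key"  # @51
  }
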
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.238 13:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.803 nvme0n1 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.803 
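
Each iteration ends with the same verification visible above: list controllers, pull the names with jq, compare, detach. The \n\v\m\e\0 in the comparison is only xtrace display quoting: inside [[ ]] a quoted right-hand side of == is matched literally rather than as a glob, and xtrace renders that by escaping every character. Roughly (the name variable here is illustrative):

  # verify exactly one controller named nvme0 came up, then tear it down
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]               # traced as [[ nvme0 == \n\v\m\e\0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
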
13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:11.803 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.804 13:15:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.804 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 nvme0n1 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.369 13:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.934 nvme0n1 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.934 
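
Before every attach, connect_authenticate pins the initiator's negotiation space with bdev_nvme_set_options so that only the digest/DH-group pair under test can be negotiated; a mismatch then fails the connect instead of silently falling back to another group. The call is taken verbatim from the trace; the accepted values appear to mirror the NVMe DH-HMAC-CHAP lists (sha256/sha384/sha512 and null/ffdhe2048 through ffdhe8192), though this excerpt only exercises the combinations shown:

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
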
13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.934 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.935 13:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.501 nvme0n1 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- 
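
For orientation, the five key slots this excerpt keeps cycling through, written as the arrays the loop indexes (values abbreviated with "..." here; the full strings appear verbatim in the surrounding trace, and keyid=4 deliberately has no controller key):

  keys=(  DHHC-1:00:OGM4...  DHHC-1:00:MTlh...  DHHC-1:01:YWRk...  DHHC-1:02:NmQ4...  DHHC-1:03:ZDBk... )
  ckeys=( DHHC-1:03:MzQw...  DHHC-1:02:ZTU4...  DHHC-1:01:N2Qz...  DHHC-1:00:MWU0...  '' )
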
common/autotest_common.sh@10 -- # set +x 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.501 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 nvme0n1 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.068 13:15:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.003 nvme0n1 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.003 13:15:36 
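
The @741-@755 block repeated before every attach is get_main_ns_ip resolving which address to dial for the transport under test. A sketch of the logic as traced (any name not visible in the trace, such as the TEST_TRANSPORT variable, is an assumption):

  get_main_ns_ip() {  # nvmf/common.sh@741-@755, reconstructed
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                     # @747: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @747: [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}                     # @748
      [[ -z ${!ip} ]] && return 1    # @750: indirect expansion, traced as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                  # @755
  }
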
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.003 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.261 13:15:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.194 nvme0n1 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.195 13:15:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.129 nvme0n1 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.129 
13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.129 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.130 13:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.064 nvme0n1 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.064 
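
The DHHC-1 strings cycling through this trace follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 payload>:, where the second field indicates how the secret was transformed (00 = no transformation; 01/02/03 = SHA-256/384/512, which fixes 32/48/64-byte secrets) and the payload decodes to the secret with a 4-byte CRC-32 appended. A quick shell sanity check on one slot (the key below is the keyid=4 value from this trace; the field reading is per the NVMe-oF secret format as understood here):

  key='DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=:'
  payload=${key#DHHC-1:??:}; payload=${payload%:}
  echo -n "$payload" | base64 -d | wc -c   # prints 68: 64-byte secret + 4-byte CRC-32
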
13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.064 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.323 13:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 nvme0n1 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
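
At this point the trace rolls over from hmac(sha256) to hmac(sha384): the @100/@101/@102 markers are the three nested loops of the test matrix, and everything in this excerpt is repetitions of the same two-call body. Reconstructed shape (line tags from the trace; the array contents are inferred from the values exercised):

  for digest in "${digests[@]}"; do            # @100: sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do      # @101: ... ffdhe4096 ffdhe6144 ffdhe8192 ...
          for keyid in "${!keys[@]}"; do       # @102: 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: arm the target
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
          done
      done
  done
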
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.263 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 nvme0n1 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.264 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:19.522 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
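
One last recurring pattern: every rpc_cmd above is bracketed by common/autotest_common.sh@559 xtrace_disable, @10 set +x, and a @587 [[ 0 == 0 ]] on the way back. That appears to be the harness muting bash tracing inside its RPC plumbing and then consulting a nesting counter to decide whether to switch -x back on; the 0 == 0 lines would be that counter check, not a test assertion. A minimal sketch of the idea (the real helpers in autotest_common.sh keep more state than this):

  XTRACE_NESTING=0
  xtrace_disable() { ((XTRACE_NESTING++)); set +x; }   # @559 / @10
  xtrace_restore() {
      ((XTRACE_NESTING--)) || true
      [[ $XTRACE_NESTING == 0 ]] && set -x             # traced above as [[ 0 == 0 ]]
      return 0
  }
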
00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.523 nvme0n1 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.523 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.780 nvme0n1 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.780 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.781 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.040 nvme0n1 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.040 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.300 nvme0n1 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
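Here the trace moves from ffdhe2048 to ffdhe3072: the host/auth.sh@101 and @102 entries show a nested sweep that re-keys the target and re-runs connect_authenticate for every (DH group, key index) pair, with the digest fixed at sha384 throughout this portion of the log; the ffdhe3072/keyid 0 iteration it just started continues below. The shape of that driver loop, as implied by the @101-@104 xtrace lines (only ffdhe2048, ffdhe3072, and ffdhe4096 are visible in this chunk, so the array is shown with just those; any further groups the full script may sweep are not shown here):

  # Sweep implied by auth.sh@101-104; array contents limited to what this log shows.
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
  for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@101
    for keyid in "${!keys[@]}"; do                     # auth.sh@102
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # auth.sh@103 (target side)
      connect_authenticate sha384 "$dhgroup" "$keyid"  # auth.sh@104 (host side)
    done
  done

Five key indices (0 through 4) are exercised per group; keyid 4 has no controller key (its ckey= entry is empty), so that attach runs with --dhchap-key key4 alone, i.e. unidirectional authentication.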
00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.300 13:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.558 nvme0n1 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
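The nvmet_auth_set_key trace just above (auth.sh@42-51) is the target-side half of each iteration, and the matching host-side trace continues below. The bare echo entries at @48-51 are redirected writes whose "> target" part bash xtrace does not print; they push the HMAC name, DH group, and DHHC-1 secrets into the kernel nvmet target's configuration for the host NQN. A hypothetical reconstruction under that assumption; the configfs attribute paths below are assumptions, not something the log shows:

  # Hypothetical sketch of nvmet_auth_set_key (auth.sh@42-51). xtrace hides
  # redirections, so these configfs paths are assumed, not taken from the log.
  nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"                # auth.sh@48
    echo "$dhgroup" > "$host/dhchap_dhgroup"                  # auth.sh@49
    echo "$key" > "$host/dhchap_key"                          # auth.sh@50
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
  }

The secrets themselves use the DH-HMAC-CHAP text form DHHC-1:NN:<base64>:, where the two-digit field records how the secret was transformed (00 for an untransformed secret; 01/02/03 for SHA-256/384/512), which is why the key= and ckey= values in this log carry differing prefixes.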
00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.558 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.559 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.559 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.559 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.818 nvme0n1 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.818 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.107 nvme0n1 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.107 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.366 nvme0n1 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.366 13:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.624 nvme0n1 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.624 13:15:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.624 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.896 nvme0n1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.896 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.153 nvme0n1 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.153 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.409 13:15:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.409 13:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.668 nvme0n1 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:22.668 13:15:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.668 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.929 nvme0n1 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:22.929 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.930 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:23.189 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.450 nvme0n1 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.450 13:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.022 nvme0n1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.022 13:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.593 nvme0n1 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.593 13:15:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:24.593 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.594 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.163 nvme0n1 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.163 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.164 13:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 nvme0n1 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
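
[Editor's note] The run of nvmf/common.sh@741-755 entries above is the xtrace expansion of the get_main_ns_ip helper. Below is a minimal sketch of what that helper appears to do, reconstructed from the trace alone; the function body is an inference, not the literal common.sh source, and TEST_TRANSPORT plus the NVMF_* variables are assumed to be provided by the test environment:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # @744: RDMA runs resolve the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # @745: TCP runs resolve the initiator IP
    )
    # @747: bail out if the transport is unset or has no candidate variable
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip holds the *name* of the variable
    [[ -z ${!ip} ]] && return 1            # @750: indirect expansion, 10.0.0.1 in this run
    echo "${!ip}"                          # @755: emit the resolved address
}
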
00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.302 nvme0n1 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
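
[Editor's note] The host/auth.sh@55-65 entries that follow are one pass of connect_authenticate, here beginning sha384/ffdhe8192 with key index 0. A hedged sketch of that cycle as it reads off the trace: rpc_cmd, the NQNs, port 4420, and the key/ckey naming are taken verbatim from the log, while the function body itself is a simplified reconstruction rather than the literal auth.sh source:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # @58: pass a controller key only when one exists for this key index
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # @60: restrict the host to the digest/DH-group pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # @61: attach; DH-HMAC-CHAP runs as part of the fabrics CONNECT
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # @64-65: authentication succeeded only if the controller shows up; then detach
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
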
00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.302 13:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 nvme0n1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.237 13:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 nvme0n1 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:28.608 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.609 13:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 nvme0n1 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.548 13:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.549 13:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.549 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.549 13:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 nvme0n1 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.486 13:15:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.486 13:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 nvme0n1 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.419 13:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.419 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.420 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.691 nvme0n1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.691 13:15:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.691 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.949 nvme0n1 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:31.949 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.950 nvme0n1 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.950 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.208 13:15:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:32.208 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.209 13:15:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.209 nvme0n1 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.209 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.468 nvme0n1 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.468 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.727 nvme0n1 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.727 
13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.727 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.728 13:15:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.728 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.988 nvme0n1 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
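The bare echo calls in the nvmet_auth_set_key trace above and just below never show their redirections, because xtrace prints only the command words. A minimal sketch of what such a helper plausibly does, assuming it targets the Linux kernel nvmet per-host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under a hypothetical $host_dir path; the real helper lives in test/nvmf/host/auth.sh and is not shown in this log:

  # Sketch only: reconstructed from the xtrace; the configfs layout is an assumption.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]:-}
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # hypothetical path

      echo "hmac($digest)" > "$host_dir/dhchap_hash"                 # trace: auth.sh@48
      echo "$dhgroup" > "$host_dir/dhchap_dhgroup"                   # trace: auth.sh@49
      echo "$key" > "$host_dir/dhchap_key"                           # trace: auth.sh@50
      [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # trace: auth.sh@51
  }

This mirrors the observed order exactly: digest first, then DH group, then the DHHC-1 host key, and a controller key only when ckeys[keyid] is non-empty.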
00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.988 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.249 nvme0n1 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.249 13:15:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
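The get_main_ns_ip trace, which resumes just below with the [[ -z ]] guards, resolves the connect address by indirection: the transport name selects an environment-variable name, and that variable holds the IP. A plausible reconstruction from the trace, with $TEST_TRANSPORT as an assumed name for the transport variable:

  # Sketch reconstructed from the xtrace above; only the variable names are guessed.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # trace: ip=NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1            # indirect expansion: trace shows [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # trace: echo 10.0.0.1
  }

The indirect ${!ip} expansion is what turns the literal string NVMF_INITIATOR_IP seen in the trace into the 10.0.0.1 that the subsequent bdev_nvme_attach_controller call consumes.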
00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.249 13:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.508 nvme0n1 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.508 
13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.508 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 nvme0n1 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.769 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.364 nvme0n1 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.364 13:15:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.364 13:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.365 13:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.365 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.365 13:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.624 nvme0n1 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.624 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
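Stripped of the rpc_cmd/xtrace plumbing, each pass of this loop is two setup RPCs plus a verification and teardown. Assuming rpc_cmd wraps SPDK's scripts/rpc.py, and that the key0..key4/ckey0..ckey4 names refer to keyring entries registered earlier in the run, the sha512/ffdhe4096 pass with key1 that completed just above (the keyid=2 setup resumes below) is roughly equivalent to:

  # Sketch of one iteration; the rpc.py path and key names are assumptions.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The test then confirms DH-HMAC-CHAP succeeded before detaching:
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0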
00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.625 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.886 nvme0n1 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.886 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.145 nvme0n1 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.145 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.405 13:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.665 nvme0n1 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.665 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
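
[Note] Before every attach the script resolves the target address with get_main_ns_ip, whose xtrace repeats throughout this section: it keys an associative array by transport, where each entry holds the *name* of an environment variable, and dereferences it. A sketch reconstructed directly from the trace (the function wrapper and the TEST_TRANSPORT variable name are assumptions; the trace only shows the expanded values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # indirection: value is a variable name
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # 10.0.0.1 in this run
    }
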
00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.666 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.233 nvme0n1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
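
[Note] Each successful attach is verified the same way before the next combination is tried: the bare nvme0n1 lines in the log are the namespace appearing, then the script confirms via RPC that the expected controller is present and tears it down. A sketch of that check, assuming scripts/rpc.py behind rpc_cmd and jq as shown in the trace:

    # verify: the attached controller must be named nvme0
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || exit 1  # the trace's [[ nvme0 == \n\v\m\e\0 ]] comparison

    # teardown so the next digest/dhgroup/keyid combination starts clean
    scripts/rpc.py bdev_nvme_detach_controller nvme0
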
00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.233 13:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.813 nvme0n1 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.813 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.814 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 nvme0n1 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 13:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.382 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.951 nvme0n1 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.951 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.210 13:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.777 nvme0n1 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.777 13:16:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM4YzlkODRkY2M0NGMwMjA5MmYyZWU4ZDRjYTI2ZjkSKhkg: 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQwNDkwMzNkM2NlYTQ5YjUzMTczNzY0NWE2OGM5ZGYwZDc3M2RjMzE4NzUwNTIzN2ZlODc3MDg3YjcwN2ZjNofNWj4=: 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.777 13:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.711 nvme0n1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.711 13:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.652 nvme0n1 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.652 13:16:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWRkY2M5YWRlZTA0YjE5NzM1MTVmOGJhM2JjMmQ3MWT0dKmf: 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: ]] 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2QzYmE4NGEwMmFlNTcxNmJhMTQyZDM1YTMyNGRjMTgSISBm: 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.652 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.911 13:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.850 nvme0n1 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ4MzAxYzRlZmI4YjMzYTgxY2VkOGI0NDJlODQwZmJhYzYwZWVlODcwNzJkN2Q3Y08BTg==: 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU0N2Y4Y2VhNTZlYTdlZTMwZDJhMWIxZDgwNjQxMWJprprl: 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:41.850 13:16:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.850 13:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.783 nvme0n1 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBkMjI1MmM4ZDU5NWE0YmFiM2Y1MDViZWUyYzcxZTlmYjMwNTY1ODk5MmUxMWUzODc2NGIzMTExM2QyNDJlNKcGytk=: 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:42.783 13:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.720 nvme0n1 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.720 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlhOTExYzZlZTZkYTE2MTViYWMzNDdkMGZmYmNlNTA4OTg1ZDk3MjRmMmU3Nzlj7EoiNw==: 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDVjZTUzYTY2ODIzNTQ0MzBmMjYxNzliZGM0NDE3YTljYzRmMTEwNmIxMWNi62BdpA==: 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.978 
13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.978 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.978 request: 00:23:43.978 { 00:23:43.978 "name": "nvme0", 00:23:43.978 "trtype": "tcp", 00:23:43.978 "traddr": "10.0.0.1", 00:23:43.978 "adrfam": "ipv4", 00:23:43.978 "trsvcid": "4420", 00:23:43.978 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:43.978 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:43.979 "prchk_reftag": false, 00:23:43.979 "prchk_guard": false, 00:23:43.979 "hdgst": false, 00:23:43.979 "ddgst": false, 00:23:43.979 "method": "bdev_nvme_attach_controller", 00:23:43.979 "req_id": 1 00:23:43.979 } 00:23:43.979 Got JSON-RPC error response 00:23:43.979 response: 00:23:43.979 { 00:23:43.979 "code": -5, 00:23:43.979 "message": "Input/output error" 00:23:43.979 } 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.979 request: 00:23:43.979 { 00:23:43.979 "name": "nvme0", 00:23:43.979 "trtype": "tcp", 00:23:43.979 "traddr": "10.0.0.1", 00:23:43.979 "adrfam": "ipv4", 00:23:43.979 "trsvcid": "4420", 00:23:43.979 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:43.979 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:43.979 "prchk_reftag": false, 00:23:43.979 "prchk_guard": false, 00:23:43.979 "hdgst": false, 00:23:43.979 "ddgst": false, 00:23:43.979 "dhchap_key": "key2", 00:23:43.979 "method": "bdev_nvme_attach_controller", 00:23:43.979 "req_id": 1 00:23:43.979 } 00:23:43.979 Got JSON-RPC error response 00:23:43.979 response: 00:23:43.979 { 00:23:43.979 "code": -5, 00:23:43.979 "message": "Input/output error" 00:23:43.979 } 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:43.979 13:16:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:43.979 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.236 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 request: 00:23:44.237 { 00:23:44.237 "name": "nvme0", 00:23:44.237 "trtype": "tcp", 00:23:44.237 "traddr": "10.0.0.1", 00:23:44.237 "adrfam": "ipv4", 
00:23:44.237 "trsvcid": "4420", 00:23:44.237 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:44.237 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:44.237 "prchk_reftag": false, 00:23:44.237 "prchk_guard": false, 00:23:44.237 "hdgst": false, 00:23:44.237 "ddgst": false, 00:23:44.237 "dhchap_key": "key1", 00:23:44.237 "dhchap_ctrlr_key": "ckey2", 00:23:44.237 "method": "bdev_nvme_attach_controller", 00:23:44.237 "req_id": 1 00:23:44.237 } 00:23:44.237 Got JSON-RPC error response 00:23:44.237 response: 00:23:44.237 { 00:23:44.237 "code": -5, 00:23:44.237 "message": "Input/output error" 00:23:44.237 } 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.237 rmmod nvme_tcp 00:23:44.237 rmmod nvme_fabrics 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3917377 ']' 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3917377 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3917377 ']' 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3917377 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917377 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917377' 00:23:44.237 killing process with pid 3917377 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3917377 00:23:44.237 13:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3917377 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.494 13:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.030 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.030 13:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:47.030 13:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:47.030 13:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:47.030 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:47.031 13:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:47.967 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:47.967 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:47.967 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:48.931 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:48.931 13:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Oq9 /tmp/spdk.key-null.nAi /tmp/spdk.key-sha256.Nto /tmp/spdk.key-sha384.2KJ /tmp/spdk.key-sha512.tRu 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:48.931 13:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:49.889 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:49.889 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:49.889 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:49.889 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:49.889 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:49.889 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:49.889 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:49.889 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:49.889 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:49.889 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:49.889 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:49.889 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:49.889 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:49.889 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:49.889 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:49.889 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:49.889 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:50.146 00:23:50.146 real 0m50.628s 00:23:50.146 user 0m48.627s 00:23:50.146 sys 0m5.908s 00:23:50.146 13:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.146 13:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.146 ************************************ 00:23:50.146 END TEST nvmf_auth_host 00:23:50.146 ************************************ 00:23:50.146 13:16:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.146 13:16:11 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:50.146 13:16:11 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:50.146 13:16:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.146 13:16:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.146 13:16:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.146 ************************************ 00:23:50.146 START TEST nvmf_digest 00:23:50.146 ************************************ 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:50.146 * Looking for test storage... 
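A note on the teardown traced just above for nvmf_auth_host: the kernel nvmet target is unwound in strict reverse order of its setup. The allowed_hosts and port-to-subsystem symlinks are removed first, then the host, namespace, port and subsystem directories, and nvmet_tcp/nvmet are unloaded only once the configfs tree is empty. A minimal sketch of the same order follows; xtrace does not show the redirect target of the traced "echo 0", so writing the namespace's "enable" attribute below is an assumption:

    nqn=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet
    rm "$cfg/subsystems/$nqn/allowed_hosts/$host"        # drop the host authorization link first
    rmdir "$cfg/hosts/$host"
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the traced 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink the subsystem from port 1
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                          # modules go last, once configfs is empty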
00:23:50.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.146 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.147 13:16:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.147 13:16:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:52.683 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:52.683 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:52.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:52.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.683 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:23:52.684 00:23:52.684 --- 10.0.0.2 ping statistics --- 00:23:52.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.684 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:23:52.684 00:23:52.684 --- 10.0.0.1 ping statistics --- 00:23:52.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.684 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:52.684 ************************************ 00:23:52.684 START TEST nvmf_digest_clean 00:23:52.684 ************************************ 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3926963 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3926963 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3926963 ']' 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.684 
13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.684 13:16:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.684 [2024-07-15 13:16:14.020723] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:23:52.684 [2024-07-15 13:16:14.020810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.684 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.684 [2024-07-15 13:16:14.095399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.684 [2024-07-15 13:16:14.207509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.684 [2024-07-15 13:16:14.207567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.684 [2024-07-15 13:16:14.207597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.684 [2024-07-15 13:16:14.207607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.684 [2024-07-15 13:16:14.207617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
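The nvmf_tgt for the digest suite is launched with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, so nothing is configured until RPCs arrive on /var/tmp/spdk.sock; the null0 bdev and the 10.0.0.2:4420 listener reported a few lines below come from common_target_config. A rough hand-driven equivalent, with the bdev size and subsystem flags assumed (the trace shows only the resulting notices, not the RPC arguments):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc framework_start_init                              # leave the --wait-for-rpc pause
    $rpc bdev_null_create null0 100 512                    # assumed size (MiB) and block size
    $rpc nvmf_create_transport -t tcp -o                   # NVMF_TRANSPORT_OPTS seen earlier in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420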
00:23:52.684 [2024-07-15 13:16:14.207646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.619 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:53.619 null0 00:23:53.619 [2024-07-15 13:16:15.158885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.619 [2024-07-15 13:16:15.183110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3927121 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3927121 /var/tmp/bperf.sock 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3927121 ']' 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:53.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.620 13:16:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 [2024-07-15 13:16:15.233655] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:23:53.620 [2024-07-15 13:16:15.233735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927121 ] 00:23:53.620 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.620 [2024-07-15 13:16:15.302650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.877 [2024-07-15 13:16:15.424028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.809 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.809 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:54.809 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:54.809 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:54.809 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:55.068 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:55.068 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:55.327 nvme0n1 00:23:55.327 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:55.327 13:16:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:55.327 Running I/O for 2 seconds... 
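Everything bdevperf does in this suite is driven over its private RPC socket rather than its command line: the bperf_rpc and bperf_py helpers above expand to exactly the three commands below, which could be replayed by hand against a bdevperf started with -r /var/tmp/bperf.sock -z --wait-for-rpc:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests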
00:23:57.858
00:23:57.858 Latency(us)
00:23:57.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:57.858 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:57.858 nvme0n1 : 2.01 13302.49 51.96 0.00 0.00 9606.67 4684.61 25631.86
00:23:57.858 ===================================================================================================================
00:23:57.858 Total : 13302.49 51.96 0.00 0.00 9606.67 4684.61 25631.86
00:23:57.858 {
00:23:57.859 "core_count": 1,
00:23:57.859 "test_results": [
00:23:57.859 {
00:23:57.859 "job": "nvme0n1",
00:23:57.859 "test_status": "finished",
00:23:57.859 "core_mask": "0x2",
00:23:57.859 "workload": "randread",
00:23:57.859 "queue_depth": 128,
00:23:57.859 "io_size": 4096,
00:23:57.859 "runtime": 2.0063910484313965,
00:23:57.859 "io_per_second": 13302.491887174534,
00:23:57.859 "MiB_per_second": 51.96285893427552,
00:23:57.859 "fails_per_second": 0.0,
00:23:57.859 "timeout_per_second": 0.0,
00:23:57.859 "average_latency_us": 9606.666081845053,
00:23:57.859 "min_latency_us": 4684.61037037037,
00:23:57.859 "max_latency_us": 25631.85777777778
00:23:57.859 }
00:23:57.859 ]
00:23:57.859 }
00:23:57.859 13:16:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:57.859 13:16:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:57.859 13:16:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:57.859 13:16:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:57.859 13:16:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:57.859 | select(.opcode=="crc32c")
00:23:57.859 | "\(.module_name) \(.executed)"'
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3927121
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3927121 ']'
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3927121
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927121
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927121'
00:23:57.859 killing process with pid 3927121
00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3927121
common/autotest_common.sh@967 -- # kill 3927121 00:23:57.859 Received shutdown signal, test time was about 2.000000 seconds 00:23:57.859 00:23:57.859 Latency(us) 00:23:57.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.859 =================================================================================================================== 00:23:57.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3927121 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3927607 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3927607 /var/tmp/bperf.sock 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3927607 ']' 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.859 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:58.117 [2024-07-15 13:16:19.575540] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:23:58.117 [2024-07-15 13:16:19.575631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927607 ] 00:23:58.117 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:58.117 Zero copy mechanism will not be used. 
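
The crc32c accounting check that just ran is one RPC plus the jq filter shown in the trace; a standalone sketch of the same verification (get_accel_stats is simplified here to a plain pipeline):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # With --ddgst and no accel offload configured, digests must come from the software module
    (( acc_executed > 0 )) && [[ $acc_module == software ]] || echo "unexpected crc32c module: $acc_module" >&2
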
00:23:58.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.117 [2024-07-15 13:16:19.641441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.117 [2024-07-15 13:16:19.759135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.117 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.117 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:58.117 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:58.117 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:58.117 13:16:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:58.683 13:16:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.683 13:16:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:59.248 nvme0n1 00:23:59.248 13:16:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:59.248 13:16:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:59.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:59.248 Zero copy mechanism will not be used. 00:23:59.248 Running I/O for 2 seconds... 
00:24:01.146 
00:24:01.146 Latency(us)
00:24:01.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:01.146 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:01.146 nvme0n1 : 2.00 3164.08 395.51 0.00 0.00 5052.33 4781.70 12815.93
00:24:01.147 ===================================================================================================================
00:24:01.147 Total : 3164.08 395.51 0.00 0.00 5052.33 4781.70 12815.93
00:24:01.147 {
00:24:01.147 "core_count": 1,
00:24:01.147 "test_results": [
00:24:01.147 {
00:24:01.147 "job": "nvme0n1",
00:24:01.147 "test_status": "finished",
00:24:01.147 "core_mask": "0x2",
00:24:01.147 "workload": "randread",
00:24:01.147 "queue_depth": 16,
00:24:01.147 "io_size": 131072,
00:24:01.147 "runtime": 2.0024800300598145,
00:24:01.147 "io_per_second": 3164.076545084096,
00:24:01.147 "MiB_per_second": 395.509568135512,
00:24:01.147 "fails_per_second": 0.0,
00:24:01.147 "timeout_per_second": 0.0,
00:24:01.147 "average_latency_us": 5052.334784885897,
00:24:01.147 "min_latency_us": 4781.700740740741,
00:24:01.147 "max_latency_us": 12815.92888888889
00:24:01.147 }
00:24:01.147 ]
00:24:01.147 }
00:24:01.147 13:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:01.147 13:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:01.147 13:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:01.147 13:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:01.147 13:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:01.147 | select(.opcode=="crc32c")
00:24:01.147 | "\(.module_name) \(.executed)"'
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3927607
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3927607 ']'
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3927607
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:01.404 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927607
00:24:01.662 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:01.662 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:01.662 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927607'
killing process with pid 3927607
13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3927607
00:24:01.662 Received shutdown signal, test time was about 2.000000 seconds
00:24:01.662 
00:24:01.662 Latency(us)
00:24:01.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:01.662 ===================================================================================================================
00:24:01.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:01.662 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3927607
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3928069
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3928069 /var/tmp/bperf.sock
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3928069 ']'
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:01.921 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:24:01.921 [2024-07-15 13:16:23.417585] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
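
The killprocess helper traced at each teardown above follows the same guard sequence every time; a simplified sketch of that logic (the real helper in autotest_common.sh carries additional platform branches):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                      # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
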
00:24:01.921 [2024-07-15 13:16:23.417671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928069 ] 00:24:01.921 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.921 [2024-07-15 13:16:23.476055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.921 [2024-07-15 13:16:23.583565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.179 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.179 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:02.179 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.179 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.179 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:02.437 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.437 13:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.695 nvme0n1 00:24:02.695 13:16:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.695 13:16:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.953 Running I/O for 2 seconds... 
00:24:04.853 
00:24:04.853 Latency(us)
00:24:04.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.853 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:04.853 nvme0n1 : 2.01 21255.09 83.03 0.00 0.00 6012.36 2439.40 11747.93
00:24:04.853 ===================================================================================================================
00:24:04.853 Total : 21255.09 83.03 0.00 0.00 6012.36 2439.40 11747.93
00:24:04.853 {
00:24:04.853 "core_count": 1,
00:24:04.853 "test_results": [
00:24:04.853 {
00:24:04.853 "job": "nvme0n1",
00:24:04.853 "test_status": "finished",
00:24:04.853 "core_mask": "0x2",
00:24:04.853 "workload": "randwrite",
00:24:04.853 "queue_depth": 128,
00:24:04.853 "io_size": 4096,
00:24:04.853 "runtime": 2.006155014038086,
00:24:04.853 "io_per_second": 21255.08746831626,
00:24:04.853 "MiB_per_second": 83.02768542311038,
00:24:04.853 "fails_per_second": 0.0,
00:24:04.853 "timeout_per_second": 0.0,
00:24:04.853 "average_latency_us": 6012.358925221509,
00:24:04.853 "min_latency_us": 2439.3955555555553,
00:24:04.853 "max_latency_us": 11747.934814814815
00:24:04.853 }
00:24:04.853 ]
00:24:04.853 }
00:24:04.853 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:04.853 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:04.853 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:04.853 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:04.853 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:04.853 | select(.opcode=="crc32c")
00:24:04.853 | "\(.module_name) \(.executed)"'
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3928069
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3928069 ']'
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3928069
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928069
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928069'
killing process with pid 3928069
13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3928069
00:24:05.112 Received shutdown signal, test time was about 2.000000 seconds
00:24:05.112 
00:24:05.112 Latency(us)
00:24:05.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.112 ===================================================================================================================
00:24:05.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:05.112 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3928069
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3928474
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3928474 /var/tmp/bperf.sock
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3928474 ']'
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:05.370 13:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:24:05.370 [2024-07-15 13:16:27.025328] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:05.370 [2024-07-15 13:16:27.025419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928474 ]
00:24:05.370 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:05.370 Zero copy mechanism will not be used.
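
Each perform_tests pass also prints the machine-readable JSON blob seen above; the headline numbers can be pulled back out with the same jq already used by the suite. A sketch, assuming one such blob was saved to a hypothetical results.json:

    jq -r '.test_results[] |
        "\(.job): \(.workload) qd=\(.queue_depth) io=\(.io_size)B -> \(.io_per_second) IOPS, avg \(.average_latency_us) us"' results.json
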
00:24:05.370 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.629 [2024-07-15 13:16:27.086928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.629 [2024-07-15 13:16:27.204566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.629 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.629 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:05.629 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.629 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.629 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.239 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.239 13:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.497 nvme0n1 00:24:06.497 13:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.497 13:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.497 Zero copy mechanism will not be used. 00:24:06.497 Running I/O for 2 seconds... 
00:24:09.027 
00:24:09.027 Latency(us)
00:24:09.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:09.027 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:09.027 nvme0n1 : 2.00 2648.98 331.12 0.00 0.00 6026.44 4708.88 14175.19
00:24:09.027 ===================================================================================================================
00:24:09.027 Total : 2648.98 331.12 0.00 0.00 6026.44 4708.88 14175.19
00:24:09.027 {
00:24:09.027 "core_count": 1,
00:24:09.027 "test_results": [
00:24:09.027 {
00:24:09.027 "job": "nvme0n1",
00:24:09.027 "test_status": "finished",
00:24:09.027 "core_mask": "0x2",
00:24:09.027 "workload": "randwrite",
00:24:09.027 "queue_depth": 16,
00:24:09.027 "io_size": 131072,
00:24:09.027 "runtime": 2.0049240589141846,
00:24:09.027 "io_per_second": 2648.978215633111,
00:24:09.027 "MiB_per_second": 331.1222769541389,
00:24:09.027 "fails_per_second": 0.0,
00:24:09.027 "timeout_per_second": 0.0,
00:24:09.027 "average_latency_us": 6026.435485261198,
00:24:09.027 "min_latency_us": 4708.882962962963,
00:24:09.027 "max_latency_us": 14175.194074074074
00:24:09.027 }
00:24:09.027 ]
00:24:09.027 }
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:09.027 | select(.opcode=="crc32c")
00:24:09.027 | "\(.module_name) \(.executed)"'
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3928474
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3928474 ']'
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3928474
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3928474
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3928474'
killing process with pid 3928474
13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3928474
00:24:09.027 Received shutdown signal, test time was about 2.000000 seconds
00:24:09.027 
00:24:09.027 Latency(us)
00:24:09.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:09.027 ===================================================================================================================
00:24:09.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:09.027 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3928474
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3926963
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3926963 ']'
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3926963
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926963
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:09.286 13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926963'
killing process with pid 3926963
13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3926963
13:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3926963
00:24:09.543 
00:24:09.543 real 0m17.116s
00:24:09.543 user 0m33.106s
00:24:09.543 sys 0m4.316s
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:24:09.543 ************************************
00:24:09.543 END TEST nvmf_digest_clean
00:24:09.543 ************************************
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:24:09.543 ************************************
00:24:09.543 START TEST nvmf_digest_error
00:24:09.543 ************************************
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
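
For orientation, the nvmf_digest_clean suite that just ended is four passes over the same helper, differing only in workload shape; equivalently (a sketch, with run_bperf as traced above and scan_dsa false throughout):

    for args in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $args false    # rw bs qd scan_dsa
    done
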
00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3929026 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3929026 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3929026 ']' 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.543 13:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.543 [2024-07-15 13:16:31.189093] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:24:09.543 [2024-07-15 13:16:31.189186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.543 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.801 [2024-07-15 13:16:31.257519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.801 [2024-07-15 13:16:31.373470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.801 [2024-07-15 13:16:31.373532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.801 [2024-07-15 13:16:31.373558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.801 [2024-07-15 13:16:31.373573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.801 [2024-07-15 13:16:31.373585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
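
The error variant being brought up here differs from the clean runs in the accel RPCs that appear in the trace below: crc32c is first routed to the error-injection module on the target, and corruption is then armed around the I/O run. Replayed by hand it would look roughly like this (a sketch; the trace issues these through rpc_cmd, which here reaches the nvmf_tgt inside the cvl_0_0_ns_spdk netns rather than a plain default socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Route every crc32c operation through the error-injection accel module
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # Start with injection disabled, then corrupt 256 crc32c results to force digest errors
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
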
00:24:09.801 [2024-07-15 13:16:31.373621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.732 [2024-07-15 13:16:32.204183] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.732 null0 00:24:10.732 [2024-07-15 13:16:32.327090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.732 [2024-07-15 13:16:32.351344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3929182 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3929182 /var/tmp/bperf.sock 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3929182 ']' 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.732 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.732 [2024-07-15 13:16:32.400708] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:24:10.732 [2024-07-15 13:16:32.400784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929182 ] 00:24:10.990 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.990 [2024-07-15 13:16:32.467761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.990 [2024-07-15 13:16:32.588219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.247 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.247 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:11.247 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:11.247 13:16:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.505 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.762 nvme0n1 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:11.762 13:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.020 Running I/O for 2 seconds... 00:24:12.020 [2024-07-15 13:16:33.561272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.561324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.561345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.575713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.575754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.575775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.590843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.590888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.590925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.603299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.603335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.603355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.617534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.617573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.617594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.631772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.631807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.631827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.643368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.643403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4985 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.643423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.657364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.657410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.657428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.671866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.671913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.671931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.683041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.683071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.683087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.698457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.698494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.698514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.020 [2024-07-15 13:16:33.710197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.020 [2024-07-15 13:16:33.710233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.020 [2024-07-15 13:16:33.710254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.725020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.725052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.725070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.738779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.738815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.738834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.752580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.752617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.752637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.766424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.766460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.766479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.778057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.778086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.778102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.793281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.793319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.793339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.808336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.808373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.808392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.820754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.820789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.820808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.837125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.837171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.837192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.850471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.850506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.850525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.278 [2024-07-15 13:16:33.863942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.278 [2024-07-15 13:16:33.863973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.278 [2024-07-15 13:16:33.863990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.875792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.875828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.875846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.890350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.890386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.890405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.903774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.903811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.903836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.918117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.918147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.918162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.931164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.931195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.931229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.945493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.945529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.945548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.957214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.957264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.957283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.279 [2024-07-15 13:16:33.974456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.279 [2024-07-15 13:16:33.974491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.279 [2024-07-15 13:16:33.974510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.537 [2024-07-15 13:16:33.989126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.537 [2024-07-15 13:16:33.989159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.537 [2024-07-15 13:16:33.989176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.537 [2024-07-15 13:16:34.002052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.537 [2024-07-15 13:16:34.002085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.537 [2024-07-15 13:16:34.002102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.537 [2024-07-15 13:16:34.015670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50) 00:24:12.537 [2024-07-15 13:16:34.015705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.537 [2024-07-15 13:16:34.015724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.537 [2024-07-15 13:16:34.030569] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50)
00:24:12.537 [2024-07-15 13:16:34.030609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:12.537 [2024-07-15 13:16:34.030629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for roughly a hundred more READs, 2024-07-15 13:16:34.043147 through 13:16:35.537910, all on tqpair=(0x1857d50), qid:1, len:1, with only the timestamp, cid and lba changing: a data digest error reported at nvme_tcp.c:1459, the affected READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:24:14.093 [2024-07-15 13:16:35.550408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1857d50)
00:24:14.093 [2024-07-15 13:16:35.550443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.093 [2024-07-15 13:16:35.550461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:14.093
00:24:14.093 Latency(us)
00:24:14.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:14.093 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:14.093 nvme0n1 : 2.00 18602.46 72.67 0.00 0.00 6870.73 3835.07 19223.89
00:24:14.093 ===================================================================================================================
00:24:14.093 Total : 18602.46 72.67 0.00 0.00 6870.73 3835.07 19223.89
00:24:14.093 {
00:24:14.093 "core_count": 1,
00:24:14.093 "test_results": [
00:24:14.093 {
00:24:14.093 "job": "nvme0n1",
00:24:14.093 "test_status": "finished",
00:24:14.093 "core_mask": "0x2",
00:24:14.093 "workload": "randread",
00:24:14.093 "queue_depth": 128,
00:24:14.093 "io_size": 4096,
00:24:14.093 "runtime": 2.0045199394226074,
00:24:14.093 "io_per_second": 18602.45844391675,
00:24:14.093 "MiB_per_second": 72.6658532965498,
00:24:14.093 "fails_per_second": 0.0,
00:24:14.093 "timeout_per_second": 0.0,
00:24:14.093 "average_latency_us": 6870.732302545781,
00:24:14.093 "min_latency_us": 3835.0696296296296,
00:24:14.093 "max_latency_us": 19223.893333333333
00:24:14.093 }
00:24:14.093 ]
00:24:14.093 }
00:24:14.093 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:14.093 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:14.093 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:14.093 | .driver_specific
00:24:14.093 | .nvme_error
00:24:14.093 | .status_code
00:24:14.093 | .command_transient_transport_error'
00:24:14.093 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
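The two traces above are the test's actual assertion: host/digest.sh reads the per-bdev I/O statistics over the bperf RPC socket and pulls out the transient-transport-error counter that the driver keeps once NVMe error statistics are enabled. A minimal sketch of that lookup, built only from the socket path and jq filter visible in the trace (the real helper body in host/digest.sh may differ):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns JSON; with --nvme-error-stat enabled, per-status
        # NVMe error counters appear under .driver_specific.nvme_error
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # evaluates to 146 in this run
    (( errcount > 0 ))                           # the check traced right above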
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3929182
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3929182 ']'
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3929182
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929182
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929182'
00:24:14.352 killing process with pid 3929182
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3929182
00:24:14.352 Received shutdown signal, test time was about 2.000000 seconds
00:24:14.352
00:24:14.352 Latency(us)
00:24:14.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:14.352 ===================================================================================================================
00:24:14.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:14.352 13:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3929182
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3929592
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3929592 /var/tmp/bperf.sock
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3929592 ']'
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:14.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:14.610 13:16:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:14.611 [2024-07-15 13:16:36.178636] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:14.611 [2024-07-15 13:16:36.178714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929592 ]
00:24:14.611 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:14.611 Zero copy mechanism will not be used.
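Here host/digest.sh tears down the first bdevperf instance (killprocess) and immediately launches the second error pass with a 128 KiB random-read workload. The traced invocation, restated one flag per line as a reading aid (flag glosses follow standard bdevperf usage and are not part of the log; the array form is only for annotation):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    args=(
        -m 2                    # core mask 0x2: one reactor, pinned to core 1
        -r /var/tmp/bperf.sock  # RPC socket the test script drives
        -w randread             # workload: random reads
        -o 131072               # I/O size: 128 KiB (hence the zero-copy notice above)
        -t 2                    # run for 2 seconds
        -q 16                   # queue depth 16
        -z                      # start idle; wait for the perform_tests RPC
    )
    "$bdevperf" "${args[@]}" &
    bperfpid=$!                 # 3929592 in this run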
00:24:14.611 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.611 [2024-07-15 13:16:36.239821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.870 [2024-07-15 13:16:36.358403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.804 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.370 nvme0n1 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:16.370 13:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:16.370 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:16.370 Zero copy mechanism will not be used. 00:24:16.370 Running I/O for 2 seconds... 
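Everything below is the expected output of the setup just traced: accel_error_inject_error corrupts crc32c results on the data path, so each affected transfer fails the data digest check in the host's TCP receive path (nvme_tcp_accel_seq_recv_compute_crc32_done), completes with TRANSIENT TRANSPORT ERROR (00/22), and is retried because --bdev-retry-count -1 makes bdev-layer retries unlimited. A condensed sketch of the sequence, using the same sockets and addresses as the trace; note rpc_cmd carries no -s flag here, so the injection RPCs presumably go to the default RPC socket rather than bperf.sock:

    # 1) host side: count NVMe errors per status code, retry failed I/O forever
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    # 2) keep crc32c clean while the controller attaches
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # 3) host side: attach over TCP with data digest enabled (--ddgst)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4) re-arm the injection to corrupt crc32c results (-i 32, as traced above)
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # 5) kick off the timed workload through bdevperf's RPC interface
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests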
00:24:16.370 [2024-07-15 13:16:38.044141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.370 [2024-07-15 13:16:38.044209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.370 [2024-07-15 13:16:38.044227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.370 [2024-07-15 13:16:38.054767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.370 [2024-07-15 13:16:38.054814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.370 [2024-07-15 13:16:38.054831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.370 [2024-07-15 13:16:38.065298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.370 [2024-07-15 13:16:38.065330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.370 [2024-07-15 13:16:38.065348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.075963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.076009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.076027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.086743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.086777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.086796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.097716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.097751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.097771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.108056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.108087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.108105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.118423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.118457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.118476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.129111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.628 [2024-07-15 13:16:38.129147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.628 [2024-07-15 13:16:38.129182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.628 [2024-07-15 13:16:38.140155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.140185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.140220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.151097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.151126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.151158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.161854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.161895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.161930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.172648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.172682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.172700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.183503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.183537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.183556] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.194282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.194316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.194335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.205416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.205450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.205469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.216208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.216236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.216268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.227131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.227175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.227191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.238130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.238175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.238194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.249124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.249172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.249189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.260160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.260210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.260229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.270338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.270372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.270390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.280832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.280867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.280896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.291126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.291172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.291189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.301261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.301295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.301315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.311431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.311467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.311492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.629 [2024-07-15 13:16:38.321682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.629 [2024-07-15 13:16:38.321716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.629 [2024-07-15 13:16:38.321736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.331956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.332009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
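The stream continues like this for the full 2-second run: each injected corruption yields the same three records (the data digest error from nvme_tcp.c:1459, the offending READ command, and its (00/22) completion), with only the timestamps, cid/lba values, and the sqhd sequence number advancing. A quick way to tally the failures from a captured console log, assuming it was saved to a hypothetical bperf.log:

    # count completions that carry the transient transport error status (00/22)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log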
00:24:16.889 [2024-07-15 13:16:38.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.342310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.342344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.342363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.352698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.352731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.352749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.362944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.362971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.363002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.373190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.373234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.373253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.383457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.383490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.383509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.393759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.393792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.393810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.404098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.404131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.404163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.414366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.414400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.414419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.424734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.424768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.424787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.434957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.434984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.445276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.445309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.445328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.455345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.455378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.455396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.465758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.465792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.465811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.476195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.476228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.486402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.486436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.486454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.496651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.496685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.496703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.506885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.506931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.506947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.517137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.517166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.517200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.527294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.527327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.527345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.537460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.537492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.537511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.547784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.547817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.547836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.558180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.558226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.558245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.889 [2024-07-15 13:16:38.568498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.889 [2024-07-15 13:16:38.568532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.889 [2024-07-15 13:16:38.568552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.890 [2024-07-15 13:16:38.578890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:16.890 [2024-07-15 13:16:38.578936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.890 [2024-07-15 13:16:38.578959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.148 [2024-07-15 13:16:38.589163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.148 [2024-07-15 13:16:38.589212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.148 [2024-07-15 13:16:38.589230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.148 [2024-07-15 13:16:38.599431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.148 [2024-07-15 13:16:38.599465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.148 [2024-07-15 13:16:38.599483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.148 [2024-07-15 13:16:38.609649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.148 [2024-07-15 13:16:38.609682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.148 [2024-07-15 13:16:38.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.148 [2024-07-15 13:16:38.619775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 
00:24:17.148 [2024-07-15 13:16:38.619805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.148 [2024-07-15 13:16:38.619821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.148 [2024-07-15 13:16:38.629399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.629432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.629451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.639588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.639622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.639641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.649802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.649835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.649853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.660189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.660223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.660241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.670420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.670479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.680699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.680752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.691016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.691046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.691062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.701337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.701371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.701390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.711571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.711604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.711622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.721413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.721450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.721469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.731938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.731968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.731986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.742183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.742212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.752389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.752423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.752442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.762713] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.762746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.762765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.772928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.772956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.772987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.783183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.783229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.783247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.793358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.793391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.793409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.803595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.803628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.803646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.813850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.813893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.813927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.824184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.824214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.824248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:17.149 [2024-07-15 13:16:38.834406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.834440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.834458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.149 [2024-07-15 13:16:38.844635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.149 [2024-07-15 13:16:38.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.149 [2024-07-15 13:16:38.844702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.854848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.854901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.854934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.864969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.865012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.865028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.875277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.875311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.875329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.885565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.885599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.885617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.895720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.895753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.895772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.906009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.906054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.916278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.916312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.916331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.926564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.926598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.926617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.936790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.936824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.936842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.946948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.946978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.946995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.956997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.957025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.957056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.967235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.967268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.967287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.977377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.977410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.977429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.987477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.987510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.987529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:38.997632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:38.997665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:38.997684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.007800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.007834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.007860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.018162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.018205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.018230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.028313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.028346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.028365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.038818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.038851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.038870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.049059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.049087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.049119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.059383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.059417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.059437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.069696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.069729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.069747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.079903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.079948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.079964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.090197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.090243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.408 [2024-07-15 13:16:39.090262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.408 [2024-07-15 13:16:39.100356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.408 [2024-07-15 13:16:39.100390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.409 [2024-07-15 13:16:39.100409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.110491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.110531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 
[2024-07-15 13:16:39.110551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.120746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.120779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.120798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.130946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.130974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.131006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.141132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.141160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.141176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.151319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.151352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.151371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.161583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.161617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.161636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.172006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.172035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.172051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.182454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.182487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.182506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.192725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.192758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.192776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.202975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.203003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.203020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.213313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.213346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.213364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.223638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.223671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.223689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.233834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.233867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.233895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.244076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.244105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.244135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.254337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.254371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.254390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.264676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.264709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.264727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.275030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.275074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.275089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.285353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.285386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.285414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.295756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.295790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.295808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.306051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.667 [2024-07-15 13:16:39.306080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.667 [2024-07-15 13:16:39.306095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.667 [2024-07-15 13:16:39.316273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.668 [2024-07-15 13:16:39.316306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.668 [2024-07-15 13:16:39.316325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 13:16:39.326337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.668 [2024-07-15 13:16:39.326371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.668 [2024-07-15 13:16:39.326389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 13:16:39.336568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.668 [2024-07-15 13:16:39.336603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.668 [2024-07-15 13:16:39.336622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 13:16:39.347087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.668 [2024-07-15 13:16:39.347116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.668 [2024-07-15 13:16:39.347148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 13:16:39.357310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.668 [2024-07-15 13:16:39.357343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.668 [2024-07-15 13:16:39.357362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.367485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.367519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.367537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.377832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.377888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.377910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.388078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.388122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.388137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.398241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 
00:24:17.926 [2024-07-15 13:16:39.398288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.398308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.408308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.408342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.408361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.418524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.418558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.418576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.428774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.428807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.428826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.438949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.438978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.439009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.449182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.449211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.449245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.459233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.459266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.459285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.469333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.469366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.469384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.479453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.926 [2024-07-15 13:16:39.479487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.926 [2024-07-15 13:16:39.479506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.926 [2024-07-15 13:16:39.489660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.489694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.489713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.500086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.500115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.500147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.510342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.510375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.510394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.520443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.520476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.520495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.530699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.530733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.530751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.540939] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.540966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.540982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.551260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.551295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.551320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.561362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.561397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.561416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.571575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.571608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.571627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.581780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.581813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.581832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.592012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.592044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.592061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.602233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.602267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.602285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:17.927 [2024-07-15 13:16:39.612398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.612431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.612449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.927 [2024-07-15 13:16:39.622636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:17.927 [2024-07-15 13:16:39.622669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.927 [2024-07-15 13:16:39.622687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.632874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.632938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.632954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.643220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.643266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.643285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.653537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.653570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.653588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.663830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.663890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.674186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.674233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.674251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.684310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.684343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.684361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.694532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.185 [2024-07-15 13:16:39.694565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.185 [2024-07-15 13:16:39.694583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.185 [2024-07-15 13:16:39.704729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.704762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.704780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.714842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.714883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.714905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.725198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.725227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.725267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.735401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.735435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.735453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.745641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.745674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.745693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.755768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.755801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.755820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.765897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.765941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.765958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.776249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.776282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.786537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.786571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.786590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.796788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.796821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.796840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.806978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.807006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.807037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.817338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.817377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.817397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.827522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.827555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.837835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.837869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.837896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.848033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.848062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.848095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.858256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.858290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.858309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.868612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.868645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.868663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.186 [2024-07-15 13:16:39.878764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.186 [2024-07-15 13:16:39.878797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.186 [2024-07-15 13:16:39.878816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.888897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.888943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 
[2024-07-15 13:16:39.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.899133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.899164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.899196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.909291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.909326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.909345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.919706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.919740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.919759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.929985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.930015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.930031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.940179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.940224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.940243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.950617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.950650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.950669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.960799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.960834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.960853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.971006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.971037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.971054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.981275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.981309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.981327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:39.991485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:39.991519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:39.991545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:40.001625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:40.001671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:40.001703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:40.012352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:40.012403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:40.012424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:40.023070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:40.023105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-07-15 13:16:40.023133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-07-15 13:16:40.033198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x90e4f0) 00:24:18.445 [2024-07-15 13:16:40.033262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.445 [2024-07-15 13:16:40.033282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:18.445
00:24:18.445 Latency(us)
00:24:18.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:18.445 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:18.445 nvme0n1 : 2.00 3002.47 375.31 0.00 0.00 5322.54 1662.67 13883.92
00:24:18.445 ===================================================================================================================
00:24:18.445 Total : 3002.47 375.31 0.00 0.00 5322.54 1662.67 13883.92
00:24:18.445 {
00:24:18.445 "core_count": 1,
00:24:18.445 "test_results": [
00:24:18.445 {
00:24:18.445 "job": "nvme0n1",
00:24:18.445 "test_status": "finished",
00:24:18.445 "core_mask": "0x2",
00:24:18.445 "workload": "randread",
00:24:18.445 "queue_depth": 16,
00:24:18.445 "io_size": 131072,
00:24:18.445 "runtime": 2.0040199756622314,
00:24:18.445 "io_per_second": 3002.4650452590295,
00:24:18.445 "MiB_per_second": 375.3081306573787,
00:24:18.445 "fails_per_second": 0.0,
00:24:18.445 "timeout_per_second": 0.0,
00:24:18.445 "average_latency_us": 5322.53742888975,
00:24:18.445 "min_latency_us": 1662.6725925925925,
00:24:18.445 "max_latency_us": 13883.922962962963
00:24:18.445 }
00:24:18.445 ]
00:24:18.445 }
00:24:18.445 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:18.445 | .driver_specific
00:24:18.445 | .nvme_error
00:24:18.445 | .status_code
00:24:18.445 | .command_transient_transport_error'
00:24:18.445 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:18.445 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3929592
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3929592 ']'
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3929592
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929592
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929592'
00:24:18.703 killing process with pid 3929592
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3929592
00:24:18.703 Received shutdown signal, test time was about 2.000000 seconds
00:24:18.703
00:24:18.703 Latency(us)
00:24:18.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:18.703 ===================================================================================================================
00:24:18.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:18.703 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3929592
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3930133
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3930133 /var/tmp/bperf.sock
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3930133 ']'
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:18.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:18.962 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:19.222 [2024-07-15 13:16:40.675400] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:19.222 [2024-07-15 13:16:40.675482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930133 ]
00:24:19.222 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.222 [2024-07-15 13:16:40.737813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.222 [2024-07-15 13:16:40.866058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:19.487 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:19.487 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:19.487 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:19.487 13:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:19.745 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.310 nvme0n1
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:20.310 13:16:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:20.310 Running I/O for 2 seconds...
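Distilled, the digest.sh trace above amounts to the RPC sequence sketched below. This is a sketch, not captured output: sockets, the target address, and all arguments are assumed from this run's trace, and rpc_cmd appears to address the default SPDK RPC socket (the target side, where the crc32c corruption is armed) while bperf_rpc addresses the bdevperf instance on /var/tmp/bperf.sock.

  # Enable per-controller NVMe error counters and bdev-level retries (-1 per the trace).
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach with data digest enabled (--ddgst) while crc32c injection is off, so the connect itself stays clean.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm crc32c corruption (arguments exactly as captured: -t corrupt -i 256), then drive I/O.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests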
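Each run_bperf_err round then passes or fails on the transient-error count: per the trace, the randread round above produced 194 such completions, checked by (( 194 > 0 )) at host/digest.sh@71, and the same check follows this randwrite round. A minimal sketch of that counting step, using the exact jq filter from the trace:

  # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR; these counters
  # are only populated because the controller was created with --nvme-error-stat.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'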
00:24:20.311 [2024-07-15 13:16:41.951796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190edd58
00:24:20.311 [2024-07-15 13:16:41.952947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.311 [2024-07-15 13:16:41.952987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:20.311 [2024-07-15 13:16:41.964007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ee190
00:24:20.311 [2024-07-15 13:16:41.964937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.311 [2024-07-15 13:16:41.964966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:20.311 [2024-07-15 13:16:41.976258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ff3c8
00:24:20.311 [2024-07-15 13:16:41.977185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.311 [2024-07-15 13:16:41.977215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:20.311 [2024-07-15 13:16:41.989573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e9168
00:24:20.311 [2024-07-15 13:16:41.990674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.311 [2024-07-15 13:16:41.990702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:20.311 [2024-07-15 13:16:42.002731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fd640
00:24:20.311 [2024-07-15 13:16:42.003992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.311 [2024-07-15 13:16:42.004035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.015929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fd208
00:24:20.569 [2024-07-15 13:16:42.017343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.017370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.029171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eb328
00:24:20.569 [2024-07-15 13:16:42.030783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.030812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.042242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e9168
00:24:20.569 [2024-07-15 13:16:42.044023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.044050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.055269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e1710
00:24:20.569 [2024-07-15 13:16:42.057171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.057198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.068578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6fa8
00:24:20.569 [2024-07-15 13:16:42.070694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.070722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.077413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6cc8
00:24:20.569 [2024-07-15 13:16:42.078223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.078250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.089345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6b70
00:24:20.569 [2024-07-15 13:16:42.090233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.090260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.102579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f96f8
00:24:20.569 [2024-07-15 13:16:42.103672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.103699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.115645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fc998
00:24:20.569 [2024-07-15 13:16:42.116911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.116939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.129499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e38d0
00:24:20.569 [2024-07-15 13:16:42.130961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.569 [2024-07-15 13:16:42.130988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.569 [2024-07-15 13:16:42.142298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f0350
00:24:20.569 [2024-07-15 13:16:42.143739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.143783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.154986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8618
00:24:20.570 [2024-07-15 13:16:42.156447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.156478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.167660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ecc78
00:24:20.570 [2024-07-15 13:16:42.169094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.180797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fc128
00:24:20.570 [2024-07-15 13:16:42.182388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.182418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.191224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f3a28
00:24:20.570 [2024-07-15 13:16:42.192100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.192131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.204337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb480
00:24:20.570 [2024-07-15 13:16:42.205423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.205454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.216371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1868
00:24:20.570 [2024-07-15 13:16:42.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.217480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.229946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de8a8
00:24:20.570 [2024-07-15 13:16:42.231170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.231203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.243311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de470
00:24:20.570 [2024-07-15 13:16:42.244729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.244761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:20.570 [2024-07-15 13:16:42.256665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e9e10
00:24:20.570 [2024-07-15 13:16:42.258263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.570 [2024-07-15 13:16:42.258295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:20.828 [2024-07-15 13:16:42.270063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1868
00:24:20.828 [2024-07-15 13:16:42.271825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.828 [2024-07-15 13:16:42.271856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:20.828 [2024-07-15 13:16:42.283410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e1f80
00:24:20.828 [2024-07-15 13:16:42.285349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.828 [2024-07-15 13:16:42.285380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:20.828 [2024-07-15 13:16:42.296710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6458
00:24:20.828 [2024-07-15 13:16:42.298830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.828 [2024-07-15 13:16:42.298860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.828 [2024-07-15 13:16:42.305777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eea00
00:24:20.828 [2024-07-15 13:16:42.306692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.306723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.319168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fc128
00:24:20.829 [2024-07-15 13:16:42.320231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.320262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.331231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6b70
00:24:20.829 [2024-07-15 13:16:42.332290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.332321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.344650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de470
00:24:20.829 [2024-07-15 13:16:42.345904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.345946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.358008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de8a8
00:24:20.829 [2024-07-15 13:16:42.359424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.359455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.371345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190edd58
00:24:20.829 [2024-07-15 13:16:42.372960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.372991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.384728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6b70
00:24:20.829 [2024-07-15 13:16:42.386500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.386531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.398067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e23b8
00:24:20.829 [2024-07-15 13:16:42.399983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.400013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.411374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eb760
00:24:20.829 [2024-07-15 13:16:42.413487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.413519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.420427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e9168
00:24:20.829 [2024-07-15 13:16:42.421337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.421367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.432445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f0bc0
00:24:20.829 [2024-07-15 13:16:42.433343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.433373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.445755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eea00
00:24:20.829 [2024-07-15 13:16:42.446824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.446854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.459069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190dfdc0
00:24:20.829 [2024-07-15 13:16:42.460318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.460348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.472384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e01f8
00:24:20.829 [2024-07-15 13:16:42.473801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.473832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.485685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e9e10
00:24:20.829 [2024-07-15 13:16:42.487278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.487309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.499015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eea00
00:24:20.829 [2024-07-15 13:16:42.500772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.500803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.512330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e1f80
00:24:20.829 [2024-07-15 13:16:42.514274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:20.829 [2024-07-15 13:16:42.514305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:20.829 [2024-07-15 13:16:42.525639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e12d8
00:24:21.087 [2024-07-15 13:16:42.527745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.087 [2024-07-15 13:16:42.527776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:21.087 [2024-07-15 13:16:42.534696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e2c28
00:24:21.087 [2024-07-15 13:16:42.535606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.087 [2024-07-15 13:16:42.535636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.087 [2024-07-15 13:16:42.546741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ec840
00:24:21.087 [2024-07-15 13:16:42.547639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.087 [2024-07-15 13:16:42.547669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:21.087 [2024-07-15 13:16:42.560075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8a50
00:24:21.087 [2024-07-15 13:16:42.561123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.087 [2024-07-15 13:16:42.561153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:21.087 [2024-07-15 13:16:42.573387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de470
00:24:21.087 [2024-07-15 13:16:42.574633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.087 [2024-07-15 13:16:42.574663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:21.087 [2024-07-15 13:16:42.586706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de8a8
00:24:21.088 [2024-07-15 13:16:42.588106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.588136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.600004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190edd58
00:24:21.088 [2024-07-15 13:16:42.601587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.601618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.613342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8a50
00:24:21.088 [2024-07-15 13:16:42.615090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.615121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.626652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e23b8
00:24:21.088 [2024-07-15 13:16:42.628593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.628624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.639959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8e88
00:24:21.088 [2024-07-15 13:16:42.642048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.642078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.648983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e8d30
00:24:21.088 [2024-07-15 13:16:42.649854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.649892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.662283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb480
00:24:21.088 [2024-07-15 13:16:42.663371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.663407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.675626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e49b0
00:24:21.088 [2024-07-15 13:16:42.676855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.676893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.688947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f0350
00:24:21.088 [2024-07-15 13:16:42.690355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.690385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.701812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6020
00:24:21.088 [2024-07-15 13:16:42.703255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.703286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.714910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ebb98
00:24:21.088 [2024-07-15 13:16:42.716517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.716549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.725316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eb760
00:24:21.088 [2024-07-15 13:16:42.726189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.726220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.738394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fac10
00:24:21.088 [2024-07-15 13:16:42.739465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.739496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.751705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f81e0
00:24:21.088 [2024-07-15 13:16:42.752951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.752982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.763736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1868
00:24:21.088 [2024-07-15 13:16:42.764978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.765008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.088 [2024-07-15 13:16:42.777054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e27f0
00:24:21.088 [2024-07-15 13:16:42.778471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.088 [2024-07-15 13:16:42.778501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.790377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ef270
00:24:21.346 [2024-07-15 13:16:42.791953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.791983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.803700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ec840
00:24:21.346 [2024-07-15 13:16:42.805451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.805481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.817012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f35f0
00:24:21.346 [2024-07-15 13:16:42.818930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.818960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.830328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f2510
00:24:21.346 [2024-07-15 13:16:42.832425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.832455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.839363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ef6a8
00:24:21.346 [2024-07-15 13:16:42.840265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.840295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.852700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e4140
00:24:21.346 [2024-07-15 13:16:42.853772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.853802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.866044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e12d8
00:24:21.346 [2024-07-15 13:16:42.867289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.867320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.878077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e3498
00:24:21.346 [2024-07-15 13:16:42.879317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.879347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.891426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f20d8
00:24:21.346 [2024-07-15 13:16:42.892845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.892882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.346 [2024-07-15 13:16:42.904741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fef90
00:24:21.346 [2024-07-15 13:16:42.906319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.346 [2024-07-15 13:16:42.906350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.918102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6cc8
00:24:21.347 [2024-07-15 13:16:42.919859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.919899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.931422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f3a28
00:24:21.347 [2024-07-15 13:16:42.933349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.933379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.944762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e2c28
00:24:21.347 [2024-07-15 13:16:42.946882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.946912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.953809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fe720
00:24:21.347 [2024-07-15 13:16:42.954685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.954715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.967168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fac10
00:24:21.347 [2024-07-15 13:16:42.968231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.968269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.979223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e23b8
00:24:21.347 [2024-07-15 13:16:42.980291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.980323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:42.992541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f20d8
00:24:21.347 [2024-07-15 13:16:42.993778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:42.993815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:43.005855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e3498
00:24:21.347 [2024-07-15 13:16:43.007284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:43.007315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:43.019212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eea00
00:24:21.347 [2024-07-15 13:16:43.020789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:43.020820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.347 [2024-07-15 13:16:43.032539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e23b8
00:24:21.347 [2024-07-15 13:16:43.034299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.347 [2024-07-15 13:16:43.034330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.045891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f46d0
00:24:21.604 [2024-07-15 13:16:43.047789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.047820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.059226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f57b0
00:24:21.604 [2024-07-15 13:16:43.061333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.061363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.068297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f2948
00:24:21.604 [2024-07-15 13:16:43.069210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.069240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.081161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ea680
00:24:21.604 [2024-07-15 13:16:43.082048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.082078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.093974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eaef0
00:24:21.604 [2024-07-15 13:16:43.094871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.094908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.105823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e5658
00:24:21.604 [2024-07-15 13:16:43.106732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.106763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.119179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e12d8
00:24:21.604 [2024-07-15 13:16:43.120238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.120269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.133366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e99d8
00:24:21.604 [2024-07-15 13:16:43.134629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.134660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.146477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1430
00:24:21.604 [2024-07-15 13:16:43.147868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.147906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.158466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f5378
00:24:21.604 [2024-07-15 13:16:43.159873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.159910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.171762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ecc78
00:24:21.604 [2024-07-15 13:16:43.173347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.604 [2024-07-15 13:16:43.173377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.604 [2024-07-15 13:16:43.185077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f3a28
00:24:21.605 [2024-07-15 13:16:43.186819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.186850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.198400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ec408
00:24:21.605 [2024-07-15 13:16:43.200326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.200356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.211705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e3060
00:24:21.605 [2024-07-15 13:16:43.213802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.220742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eee38
00:24:21.605 [2024-07-15 13:16:43.221641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.221671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.232911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8e88
00:24:21.605 [2024-07-15 13:16:43.233793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.233824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.246251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6458
00:24:21.605 [2024-07-15 13:16:43.247314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.259588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f20d8
00:24:21.605 [2024-07-15 13:16:43.260819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.260850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.272897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e3498
00:24:21.605 [2024-07-15 13:16:43.274301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.274331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.286208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ff3c8
00:24:21.605 [2024-07-15 13:16:43.287789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.287820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.605 [2024-07-15 13:16:43.299536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f6458
00:24:21.605 [2024-07-15 13:16:43.301299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.605 [2024-07-15 13:16:43.301330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.312874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6fa8
00:24:21.862 [2024-07-15 13:16:43.314775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.314806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.326180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f57b0
00:24:21.862 [2024-07-15 13:16:43.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.328316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.335237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e88f8
00:24:21.862 [2024-07-15 13:16:43.336115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.336145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.347270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f4b08
00:24:21.862 [2024-07-15 13:16:43.348139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.348178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.360600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eee38
00:24:21.862 [2024-07-15 13:16:43.361670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.361701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.373995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1ca0
00:24:21.862 [2024-07-15 13:16:43.375232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.375264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.387386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f5378
00:24:21.862 [2024-07-15 13:16:43.388799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.388830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.400738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ecc78
00:24:21.862 [2024-07-15 13:16:43.402326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.402356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.414091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190eee38
00:24:21.862 [2024-07-15 13:16:43.415841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.415872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.427427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ec408
00:24:21.862 [2024-07-15 13:16:43.429359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.429390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.440746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e3060
00:24:21.862 [2024-07-15 13:16:43.442853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.442891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.449815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e5ec8
00:24:21.862 [2024-07-15 13:16:43.450720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.450750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.463155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fac10
00:24:21.862 [2024-07-15 13:16:43.464206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.464236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.475196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de470
00:24:21.862 [2024-07-15 13:16:43.476263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.476304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.488546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f5378
00:24:21.862 [2024-07-15 13:16:43.489778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.489810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.501895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f1ca0
00:24:21.862 [2024-07-15 13:16:43.503311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.503341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.515234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ff3c8
00:24:21.862 [2024-07-15 13:16:43.516813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.516843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.528537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190de470
00:24:21.862 [2024-07-15 13:16:43.530293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.530323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.541835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6fa8
00:24:21.862 [2024-07-15 13:16:43.543762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.543792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:21.862 [2024-07-15 13:16:43.555186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e1b48
00:24:21.862 [2024-07-15 13:16:43.557291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.862 [2024-07-15 13:16:43.557322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.564226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f5be8
00:24:22.119 [2024-07-15 13:16:43.565109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.565139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.576268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f8e88
00:24:22.119 [2024-07-15 13:16:43.577144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.577174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.589594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e27f0
00:24:22.119 [2024-07-15 13:16:43.590657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.590687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.602897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f2510
00:24:22.119 [2024-07-15 13:16:43.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.604139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.616199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e1710
00:24:22.119 [2024-07-15 13:16:43.617610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.617640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.628088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ecc78
00:24:22.119 [2024-07-15 13:16:43.628960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.628990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.639681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190f31b8
00:24:22.119 [2024-07-15 13:16:43.640563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.640593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.653011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fcdd0
00:24:22.119 [2024-07-15 13:16:43.654039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.654076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.666329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e12d8
00:24:22.119 [2024-07-15 13:16:43.667563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.667594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.679639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6300
00:24:22.119 [2024-07-15 13:16:43.681049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.681079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.692979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190ed4e8
00:24:22.119 [2024-07-15 13:16:43.694537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.694568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:22.119 [2024-07-15 13:16:43.706302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fcdd0
00:24:22.119 [2024-07-15 13:16:43.708022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.119 [2024-07-15 13:16:43.708052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.719614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fd640 00:24:22.119 [2024-07-15 13:16:43.721535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.721565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.732948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190e6b70 00:24:22.119 [2024-07-15 13:16:43.735011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.741975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fe2e8 00:24:22.119 [2024-07-15 13:16:43.742865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.742904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.756957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.119 [2024-07-15 13:16:43.757274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.757305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.770934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.119 [2024-07-15 13:16:43.771257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.771287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.784867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.119 [2024-07-15 13:16:43.785168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.785198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.798893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.119 [2024-07-15 13:16:43.799212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.799241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.119 [2024-07-15 13:16:43.812788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.119 [2024-07-15 13:16:43.813089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.119 [2024-07-15 13:16:43.813119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.826751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.827049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.827079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.840667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.840984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.841013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.854653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.855010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.868581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.868909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.868938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.882562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.882894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.882924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.377 [2024-07-15 13:16:43.896486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8 00:24:22.377 [2024-07-15 13:16:43.896803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.377 [2024-07-15 13:16:43.896832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:22.377 [2024-07-15 13:16:43.910459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8
00:24:22.377 [2024-07-15 13:16:43.910772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.377 [2024-07-15 13:16:43.910801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:22.377 [2024-07-15 13:16:43.924384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8
00:24:22.377 [2024-07-15 13:16:43.924715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.377 [2024-07-15 13:16:43.924743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:22.377 [2024-07-15 13:16:43.938303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24056b0) with pdu=0x2000190fb8b8
00:24:22.377 [2024-07-15 13:16:43.938618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.377 [2024-07-15 13:16:43.938648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:22.377
00:24:22.377 Latency(us)
00:24:22.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.377 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:22.377 nvme0n1 : 2.01 19846.23 77.52 0.00 0.00 6434.23 2609.30 18252.99
00:24:22.377 ===================================================================================================================
00:24:22.377 Total : 19846.23 77.52 0.00 0.00 6434.23 2609.30 18252.99
00:24:22.377 {
00:24:22.377   "core_count": 1,
00:24:22.377   "test_results": [
00:24:22.377     {
00:24:22.377       "job": "nvme0n1",
00:24:22.377       "test_status": "finished",
00:24:22.377       "core_mask": "0x2",
00:24:22.377       "workload": "randwrite",
00:24:22.377       "queue_depth": 128,
00:24:22.377       "io_size": 4096,
00:24:22.377       "runtime": 2.0068299770355225,
00:24:22.377       "io_per_second": 19846.225141142997,
00:24:22.377       "MiB_per_second": 77.52431695758983,
00:24:22.377       "fails_per_second": 0.0,
00:24:22.377       "timeout_per_second": 0.0,
00:24:22.377       "average_latency_us": 6434.233852082473,
00:24:22.377       "min_latency_us": 2609.303703703704,
00:24:22.377       "max_latency_us": 18252.98962962963
00:24:22.377     }
00:24:22.377   ]
00:24:22.377 }
00:24:22.377 13:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:22.377 13:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:22.377 13:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:22.377 | .driver_specific
00:24:22.377 | .nvme_error
00:24:22.377 | .status_code
00:24:22.377 | .command_transient_transport_error'
00:24:22.377 13:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
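
For context on the two traced commands above: get_transient_errcount pulls bdevperf's I/O statistics over /var/tmp/bperf.sock and extracts the NVMe transient-transport-error counter that the (( 156 > 0 )) check below asserts on. A minimal bash sketch, reconstructed from the traced rpc.py and jq invocations rather than copied from host/digest.sh:

    #!/usr/bin/env bash
    # Sketch only: rebuilt from the rpc.py/jq commands traced above; the
    # real helper lives in host/digest.sh. Paths follow the trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-bdev NVMe error counters under
        # .driver_specific.nvme_error.status_code once the bdev was set up
        # with bdev_nvme_set_options --nvme-error-stat (traced earlier).
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The test then asserts that at least one such error was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))
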
00:24:22.634 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
00:24:22.634 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3930133
00:24:22.634 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3930133 ']'
00:24:22.634 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3930133
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3930133
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3930133'
00:24:22.635 killing process with pid 3930133
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3930133
00:24:22.635 Received shutdown signal, test time was about 2.000000 seconds
00:24:22.635
00:24:22.635 Latency(us)
00:24:22.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.635 ===================================================================================================================
00:24:22.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:22.635 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3930133
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3930576
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3930576 /var/tmp/bperf.sock
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3930576 ']'
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:22.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
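
The run_bperf_err pass above repeats the digest-error exercise with 128 KiB random writes at queue depth 16. Condensing the xtrace records around this point, the setup amounts to the sequence sketched below (the bperf_rpc/target_rpc wrapper names are illustrative; every command and argument is taken from the trace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Illustrative wrappers: bperf_rpc talks to the bdevperf app on its
    # UNIX socket, target_rpc to the nvmf target on the default socket.
    bperf_rpc()  { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    target_rpc() { "$spdk/scripts/rpc.py" "$@"; }

    # Start bdevperf idle (-z) with the 131072-byte randwrite/qd16 job.
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors per status code and retry failed I/O indefinitely
    # (-1), so injected digest failures are tallied instead of failing I/O.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled (--ddgst) while target-side crc32c
    # injection is disabled, then enable corruption (-t corrupt -i 32, as
    # traced) and run the timed workload.
    target_rpc accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted data digest then shows up in the log as a data_crc32_calc_done error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, as in the stream that follows.
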
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:22.892 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.188 [2024-07-15 13:16:44.595628] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:23.188 [2024-07-15 13:16:44.595718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930576 ]
00:24:23.188 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:23.188 Zero copy mechanism will not be used.
00:24:23.188 EAL: No free 2048 kB hugepages reported on node 1
00:24:23.188 [2024-07-15 13:16:44.661322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.188 [2024-07-15 13:16:44.778834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:23.445 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:23.445 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:23.445 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.445 13:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:23.702 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:24.266 nvme0n1
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:24.266 13:16:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:24.266 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:24.266 Zero copy mechanism will not be used. 00:24:24.266 Running I/O for 2 seconds... 00:24:24.266 [2024-07-15 13:16:45.820017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.820420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.820464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.833603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.834014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.834044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.847172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.847560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.847593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.859958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.860342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.860376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.874768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.875049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.875079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.889061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.889473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.889506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.903256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.903623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.915893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.916251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.916278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.928432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.928735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.928763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.941381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.266 [2024-07-15 13:16:45.941661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.266 [2024-07-15 13:16:45.941704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.266 [2024-07-15 13:16:45.954453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.267 [2024-07-15 13:16:45.954787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.267 [2024-07-15 13:16:45.954814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:45.966932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:45.967274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:45.967303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:45.980682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:45.981041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:45.981070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:45.993975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:45.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:24.524 [2024-07-15 13:16:45.994355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.006278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.006628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.006656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.020177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.020558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.020586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.032123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.032473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.032515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.045035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.045314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.045357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.058927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.059315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.059362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.071847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.072212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.084991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.085345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.085394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.099133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.099534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.111256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.111593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.111621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.125148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.125521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.139718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.140063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.140093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.152869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.153275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.165526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.165913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.165942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.179345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.179679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.179707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.192000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.192240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.192268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.204411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.204766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.204793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.524 [2024-07-15 13:16:46.218235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.524 [2024-07-15 13:16:46.218583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.524 [2024-07-15 13:16:46.218611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.230501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.230847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.230901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.244917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.245268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.245312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.259773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.260143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.260187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.273524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.273915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.273943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.286028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.286392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.286439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.297839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.298147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.298174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.781 [2024-07-15 13:16:46.310837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.781 [2024-07-15 13:16:46.311216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.781 [2024-07-15 13:16:46.311258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.323825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.324207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.324253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.336759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.336939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.336968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.350476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.350824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.350852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.363392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 
[2024-07-15 13:16:46.363725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.363752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.377646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.378028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.378056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.390976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.391331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.391358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.403930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.404283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.404331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.416867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.417067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.417095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.429700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.430089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.430131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.443110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.443372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.443405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.454538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.454983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.455011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.466846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.467309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.467337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.782 [2024-07-15 13:16:46.479829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:24.782 [2024-07-15 13:16:46.480176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.782 [2024-07-15 13:16:46.480205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.492663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.493009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.493037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.504275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.504750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.504778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.516804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.517228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.517255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.528738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.529234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.529261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.540389] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.540893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.540922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.552716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.553187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.553215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.565367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.565837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.565888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.578020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.578546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.578573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.591160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.591637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.591666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.603691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.604112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.604141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.616732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.617143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.617171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:25.040 [2024-07-15 13:16:46.629761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.630230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.630272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.641479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.641872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.641922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.653612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.654094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.665666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.666060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.666088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.677193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.677667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.677709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.689549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.689985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.690013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.040 [2024-07-15 13:16:46.702571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90 00:24:25.040 [2024-07-15 13:16:46.702939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.040 [2024-07-15 13:16:46.702968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:25.040 [2024-07-15 13:16:46.714390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90
00:24:25.040 [2024-07-15 13:16:46.714873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.040 [2024-07-15 13:16:46.714907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record cycle (a data_crc32_calc_done *ERROR* on tqpair=(0x223aaf0) with pdu=0x2000190fef90, the offending WRITE command on qid:1 cid:15 with a varying lba and len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion whose sqhd steps through 0001/0021/0041/0061) repeats roughly every 12 ms from 13:16:46.727 through 13:16:47.800 for the remainder of the 2-second run ...]
00:24:26.333 [2024-07-15 13:16:47.813146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x223aaf0) with pdu=0x2000190fef90
00:24:26.333 [2024-07-15 13:16:47.813493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.333 [2024-07-15 13:16:47.813521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
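Every cycle above bumps the controller's transient-transport-error counter, which is exactly what the test harvests next. As an illustrative sanity check, one could count the provoked digest failures straight from a saved copy of this console output (build.log is a stand-in name, not a file this harness produces):

    # each matching line is one digest error detected by the host side
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log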
00:24:26.333
00:24:26.333 Latency(us)
00:24:26.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:26.333 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:26.333 nvme0n1 : 2.01 2462.75 307.84 0.00 0.00 6480.56 2330.17 15437.37
00:24:26.333 ===================================================================================================================
00:24:26.333 Total : 2462.75 307.84 0.00 0.00 6480.56 2330.17 15437.37
00:24:26.333 {
00:24:26.333   "core_count": 1,
00:24:26.333   "test_results": [
00:24:26.333     {
00:24:26.333       "job": "nvme0n1",
00:24:26.333       "test_status": "finished",
00:24:26.333       "core_mask": "0x2",
00:24:26.333       "workload": "randwrite",
00:24:26.333       "queue_depth": 16,
00:24:26.333       "io_size": 131072,
00:24:26.333       "runtime": 2.0066990852355957,
00:24:26.333       "io_per_second": 2462.751015473671,
00:24:26.333       "MiB_per_second": 307.8438769342089,
00:24:26.333       "fails_per_second": 0.0,
00:24:26.333       "timeout_per_second": 0.0,
00:24:26.333       "average_latency_us": 6480.564888709025,
00:24:26.333       "min_latency_us": 2330.168888888889,
00:24:26.333       "max_latency_us": 15437.368888888888
00:24:26.333     }
00:24:26.333   ]
00:24:26.333 }
00:24:26.333 13:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:26.333 13:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:26.333 13:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:26.333 | .driver_specific
00:24:26.333 | .nvme_error
00:24:26.333 | .status_code
00:24:26.333 | .command_transient_transport_error'
00:24:26.333 13:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
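The trace above is the heart of the assertion: get_transient_errcount pulls the per-bdev error counters out of the running bdevperf over its RPC socket, jq extracts the COMMAND TRANSIENT TRANSPORT ERROR count, and the (( 159 > 0 )) check requires it to be non-zero. A minimal sketch of those helpers as reconstructed from these trace lines (the real digest.sh may differ in detail):

    # rpc.py talks to the long-running bdevperf via its dedicated socket
    bperf_rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    get_transient_errcount() {
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # the test passes only if the digest errors above were actually counted
    (( $(get_transient_errcount nvme0n1) > 0 ))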
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3930576
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3930576 ']'
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3930576
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3930576
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3930576'
00:24:26.591 killing process with pid 3930576
00:24:26.591 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3930576
00:24:26.591 Received shutdown signal, test time was about 2.000000 seconds
00:24:26.591
00:24:26.591 Latency(us)
00:24:26.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:26.591 ===================================================================================================================
00:24:26.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:26.592 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3930576
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3929026
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3929026 ']'
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3929026
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929026
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929026'
00:24:26.850 killing process with pid 3929026
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3929026
00:24:26.850 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3929026
00:24:27.111
00:24:27.111 real 0m17.582s
00:24:27.111 user 0m35.250s
00:24:27.111 sys 0m4.004s
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:27.111 ************************************
00:24:27.111 END TEST nvmf_digest_error
00:24:27.111 ************************************
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:27.111 rmmod nvme_tcp
00:24:27.111 rmmod nvme_fabrics
00:24:27.111 rmmod nvme_keyring
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3929026 ']'
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3929026
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3929026 ']'
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3929026
00:24:27.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3929026) - No such process
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3929026 is not found'
00:24:27.111 Process with pid 3929026 is not found
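killprocess runs three times above: once for the bdevperf process (comm reactor_1), once for the nvmf target (reactor_0), and once more from nvmftestfini for a pid that is already gone, which is where the "No such process" complaint and the "is not found" message come from. A rough reconstruction of the helper from the autotest_common.sh trace (lines @948 through @975; the real function may differ in detail):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # the '[' -z ... ']' guard
        if ! kill -0 "$pid"; then                      # probe only; this is what prints "No such process"
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1     # refuse to signal a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                            # reap it, matching the wait calls in the trace
    }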
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:27.111 13:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:29.645 13:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:29.645
00:24:29.645 real 0m39.091s
00:24:29.645 user 1m9.208s
00:24:29.645 sys 0m9.841s
00:24:29.645 13:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:29.645 13:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:24:29.645 ************************************
00:24:29.645 END TEST nvmf_digest
00:24:29.645 ************************************
00:24:29.645 13:16:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:29.645 13:16:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:24:29.645 13:16:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:24:29.645 13:16:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:24:29.645 13:16:50 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:29.645 13:16:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:29.645 13:16:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:29.645 13:16:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:29.645 ************************************
00:24:29.645 START TEST nvmf_bdevperf
00:24:29.645 ************************************
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
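The START TEST banner here, the END TEST banners above, and the real/user/sys triplets all come from run_test, which wraps each suite in banners and times its execution. Roughly, under the reading these trace lines allow (the argument-count guard at @1099, the timed invocation at @1123; the real helper also juggles xtrace state):

    run_test() {
        [ $# -le 1 ] && return 1          # '[' 3 -le 1 ']' above is this guard not tripping
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                         # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

So run_test nvmf_bdevperf .../bdevperf.sh --transport=tcp simply hands the script straight through with its arguments.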
00:24:29.645 * Looking for test storage...
00:24:29.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
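nvmf/common.sh has just defined the host identity used by every fabrics connection in this suite: a freshly generated host NQN, its UUID reused as the host ID, and the NVME_HOST argument array that gets spliced into nvme connect invocations. Illustratively (the target address 10.0.0.2 below is a placeholder, not taken from this run, and the parameter-expansion derivation of the host ID is one plausible way to do it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    # a later connect then looks like:
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n "$NVME_SUBNQN"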
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:24:29.645 13:16:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
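Each nested source of paths/export.sh prepends the same three toolchain directories again, which is why the PATH values above balloon with repeated /opt/golangci, /opt/protoc and /opt/go entries. Prepending an already-present directory is harmless for lookup (the first hit wins) but noisy; if one wanted to squash the duplicates, a standard awk idiom does it (illustrative only, not something this harness runs):

    # split PATH on ':', keep the first occurrence of each directory, preserve order
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH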
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.646 13:16:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.646 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.646 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.646 13:16:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.646 13:16:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:31.546 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:31.546 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:31.546 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:31.546 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.546 13:16:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:24:31.546 00:24:31.546 --- 10.0.0.2 ping statistics --- 00:24:31.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.546 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:24:31.546 00:24:31.546 --- 10.0.0.1 ping statistics --- 00:24:31.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.546 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3933009 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3933009 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3933009 ']' 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.546 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.547 13:16:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.547 [2024-07-15 13:16:53.122570] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:24:31.547 [2024-07-15 13:16:53.122652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.547 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.547 [2024-07-15 13:16:53.197480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.803 [2024-07-15 13:16:53.321088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:31.803 [2024-07-15 13:16:53.321147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:31.803 [2024-07-15 13:16:53.321173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:31.803 [2024-07-15 13:16:53.321203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:31.803 [2024-07-15 13:16:53.321221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:31.803 [2024-07-15 13:16:53.321322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:31.803 [2024-07-15 13:16:53.324899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:31.803 [2024-07-15 13:16:53.324923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 [2024-07-15 13:16:54.124027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 Malloc0
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:32.732 [2024-07-15 13:16:54.188938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
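Up to this point the trace amounts to a complete bring-up recipe: the two ice ports were split earlier across a network namespace (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk for the target, cvl_0_1 at 10.0.0.1 on the host for the initiator), and the now-listening target was configured with five rpc_cmd calls. A minimal standalone sketch of the same sequence, with every command copied from the trace, assuming an SPDK checkout and a nvmf_tgt already running inside the namespace on the default RPC socket (rpc_cmd is only a thin wrapper around scripts/rpc.py):

# data path: target port in its own namespace, initiator port on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

# target configuration: the same five calls as the rpc_cmd lines above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420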
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:32.732 {
00:24:32.732 "params": {
00:24:32.732 "name": "Nvme$subsystem",
00:24:32.732 "trtype": "$TEST_TRANSPORT",
00:24:32.732 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:32.732 "adrfam": "ipv4",
00:24:32.732 "trsvcid": "$NVMF_PORT",
00:24:32.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:32.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:32.732 "hdgst": ${hdgst:-false},
00:24:32.732 "ddgst": ${ddgst:-false}
00:24:32.732 },
00:24:32.732 "method": "bdev_nvme_attach_controller"
00:24:32.732 }
00:24:32.732 EOF
00:24:32.732 )")
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:32.732 13:16:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:32.732 "params": {
00:24:32.732 "name": "Nvme1",
00:24:32.732 "trtype": "tcp",
00:24:32.732 "traddr": "10.0.0.2",
00:24:32.732 "adrfam": "ipv4",
00:24:32.732 "trsvcid": "4420",
00:24:32.732 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:32.732 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:32.732 "hdgst": false,
00:24:32.732 "ddgst": false
00:24:32.732 },
00:24:32.732 "method": "bdev_nvme_attach_controller"
00:24:32.732 }'
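The --json /dev/fd/62 argument above is bash process substitution: gen_nvmf_target_json renders the here-document once per subsystem, pretty-prints the result through jq, and bdevperf reads the finished configuration from the pipe. A sketch of an equivalent direct invocation follows; the inner object is copied from the printf output above, while the top-level "subsystems"/"bdev" wrapper is an assumption here (gen_nvmf_target_json adds it, but it is not echoed in this excerpt):

# -q 128: queue depth, -o 4096: I/O size in bytes, -w verify: verified
# read/write workload, -t 1: run time in seconds
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)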
00:24:32.732 [2024-07-15 13:16:54.238271] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:32.732 [2024-07-15 13:16:54.238344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933160 ]
00:24:32.732 EAL: No free 2048 kB hugepages reported on node 1
00:24:32.732 [2024-07-15 13:16:54.297128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:32.732 [2024-07-15 13:16:54.411402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:33.296 Running I/O for 1 seconds...
00:24:34.261
00:24:34.261 Latency(us)
00:24:34.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:34.261 Verification LBA range: start 0x0 length 0x4000
00:24:34.261 Nvme1n1 : 1.01 8620.26 33.67 0.00 0.00 14761.20 958.77 14757.74
00:24:34.261 ===================================================================================================================
00:24:34.261 Total : 8620.26 33.67 0.00 0.00 14761.20 958.77 14757.74
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3933428
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:34.518 {
00:24:34.518 "params": {
00:24:34.518 "name": "Nvme$subsystem",
00:24:34.518 "trtype": "$TEST_TRANSPORT",
00:24:34.518 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:34.518 "adrfam": "ipv4",
00:24:34.518 "trsvcid": "$NVMF_PORT",
00:24:34.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:34.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:34.518 "hdgst": ${hdgst:-false},
00:24:34.518 "ddgst": ${ddgst:-false}
00:24:34.518 },
00:24:34.518 "method": "bdev_nvme_attach_controller"
00:24:34.518 }
00:24:34.518 EOF
00:24:34.518 )")
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:34.518 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:34.519 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:34.519 13:16:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:34.519 "params": {
00:24:34.519 "name": "Nvme1",
00:24:34.519 "trtype": "tcp",
00:24:34.519 "traddr": "10.0.0.2",
00:24:34.519 "adrfam": "ipv4",
00:24:34.519 "trsvcid": "4420",
00:24:34.519 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:34.519 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:34.519 "hdgst": false,
00:24:34.519 "ddgst": false
00:24:34.519 },
00:24:34.519 "method": "bdev_nvme_attach_controller"
00:24:34.519 }'
00:24:34.519 [2024-07-15 13:16:56.076570] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:34.519 [2024-07-15 13:16:56.076645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933428 ]
00:24:34.519 EAL: No free 2048 kB hugepages reported on node 1
00:24:34.775 [2024-07-15 13:16:56.136984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:34.775 [2024-07-15 13:16:56.248452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:34.775 Running I/O for 15 seconds...
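A sanity check on the one-second table above: 8620.26 I/Os per second at 4096 bytes each is 8620.26 * 4096 ≈ 35,308,585 bytes/s, and 35,308,585 / 2^20 ≈ 33.67 MiB/s, matching the reported MiB/s column. The 15-second run just started is then deliberately disturbed: host/bdevperf.sh kills the target with a full queue in flight, and every outstanding command on qid:1 completes with ABORTED - SQ DELETION (00/08). Each len:8 command is 8 x 512 B blocks, i.e. the 4096-byte I/O size. The pair in parentheses is SCT/SC: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion). A throwaway decoder for that pair; the helper name is hypothetical:

# decode_status: decode the "(SCT/SC)" pair SPDK prints, e.g. (00/08)
decode_status() {
  local sct=$((16#$1)) sc=$((16#$2))
  local -a types=("generic command status" "command specific status"
                  "media and data integrity errors" "path related status")
  echo "SCT 0x$1: ${types[sct]:-reserved}"
  # only the code seen in this log is spelled out here
  ((sct == 0 && sc == 0x08)) && echo "SC 0x$2: command aborted due to SQ deletion"
}
decode_status 00 08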
00:24:38.060 13:16:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3933009
00:24:38.060 13:16:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:38.060 [2024-07-15 13:16:59.043196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.060 [2024-07-15 13:16:59.043248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for each remaining outstanding command on qid:1 -- WRITEs covering lba 38432-38808 and READs covering lba 37800-38288, all len:8, all completed ABORTED - SQ DELETION (00/08) after the kill -9 above ...]
[2024-07-15 13:16:59.047051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 13:16:59.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.063 [2024-07-15 13:16:59.047371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.063 [2024-07-15 13:16:59.047597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23514c0 is same with the state(5) to be set 00:24:38.063 [2024-07-15 13:16:59.047631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:38.063 [2024-07-15 13:16:59.047643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:38.063 [2024-07-15 13:16:59.047656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38416 len:8 PRP1 0x0 PRP2 0x0 00:24:38.063 [2024-07-15 13:16:59.047670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047734] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23514c0 was disconnected and freed. reset controller. 
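Every completion in the flood above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base spec -- the expected way in-flight READs and WRITEs are failed back when a submission queue is torn down for a controller reset. Note dnr:0 (Do Not Retry clear), which is why the initiator keeps retrying below. As a rough illustration only (a stand-alone sketch, not an SPDK helper), the 16-bit completion status word that these lines render as "(SCT/SC) ... p:.. m:.. dnr:.." can be unpacked like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative decoder for the NVMe completion status word (CQE DW3[31:16]).
     * Bit 0 is the phase tag; bits 8:1 SC, 11:9 SCT, 13:12 CRD, 14 M, 15 DNR.
     * This mirrors the fields the log prints, but is not SPDK code. */
    static void print_status(uint16_t v)
    {
        unsigned p   = v & 0x1;
        unsigned sc  = (v >> 1) & 0xff;
        unsigned sct = (v >> 9) & 0x7;
        unsigned crd = (v >> 12) & 0x3;
        unsigned m   = (v >> 14) & 0x1;
        unsigned dnr = (v >> 15) & 0x1;
        printf("(%02x/%02x) crd:%u p:%u m:%u dnr:%u\n", sct, sc, crd, p, m, dnr);
    }

    int main(void)
    {
        /* ABORTED - SQ DELETION: prints "(00/08) crd:0 p:0 m:0 dnr:0" */
        print_status(0x08 << 1);
        return 0;
    }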
00:24:38.063 [2024-07-15 13:16:59.047806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.063 [2024-07-15 13:16:59.047834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.063 [2024-07-15 13:16:59.047866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.063 [2024-07-15 13:16:59.047907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.063 [2024-07-15 13:16:59.047951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.063 [2024-07-15 13:16:59.047963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.063 [2024-07-15 13:16:59.051583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.063 [2024-07-15 13:16:59.051622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.063 [2024-07-15 13:16:59.052511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-15 13:16:59.052543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.052561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.052799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.053056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.053078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.053094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.056665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
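Two details in the block above: the admin-queue aborts are for opcode 0x0c, Asynchronous Event Request, which a controller holds open indefinitely and which must be aborted on teardown; and errno = 111 in the posix_sock_create failure is ECONNREFUSED on Linux -- nothing is accepting TCP connections at 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port), so every reconnect dies at connect(2), after which any flush of the dead qpair fails with errno 9, EBADF ("Bad file descriptor"). A minimal stand-alone probe of what the transport is attempting at this point (address and port copied from the log; this is not SPDK code) looks roughly like:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch: issue the same TCP connect the NVMe-oF initiator is making. */
    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            /* With no listener this prints: connect: errno=111 (Connection refused) */
            printf("connect: errno=%d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }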
00:24:38.064 [2024-07-15 13:16:59.065929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.066355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.066387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.066404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.066642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.066893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.066916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.066931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.070488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.064 [2024-07-15 13:16:59.079960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.080411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.080443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.080461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.080699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.080953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.080977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.080993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.084561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
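From here the log settles into one repeating cycle: disconnect the controller, attempt the TCP reconnect (refused), fail the flush on the dead socket, mark the controller failed, report "Resetting controller failed", then retry -- the timestamps show a new cycle roughly every 14 ms. A generic sketch of that cadence (try_connect() is a hypothetical stand-in for the transport connect path; this is not SPDK's reset state machine) might look like:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* The target keeps refusing, so every attempt fails. */
    static bool try_connect(void) { return false; }

    int main(void)
    {
        /* ~14 ms between cycles, as seen in the log timestamps. */
        const struct timespec delay = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };
        for (int attempt = 1; attempt <= 5; attempt++) {
            if (try_connect()) {
                printf("attempt %d: reconnected\n", attempt);
                return 0;
            }
            printf("attempt %d: controller reinitialization failed, retrying\n",
                   attempt);
            nanosleep(&delay, NULL);
        }
        printf("giving up after 5 attempts\n");
        return 1;
    }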
00:24:38.064 [2024-07-15 13:16:59.093955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.094396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.094427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.094445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.094682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.094937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.094961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.094976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.098533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.064 [2024-07-15 13:16:59.107792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.108239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.108270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.108288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.108525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.108766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.108789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.108804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.112370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.064 [2024-07-15 13:16:59.121621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.122061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.122092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.122109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.122346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.122593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.122616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.122632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.126220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.064 [2024-07-15 13:16:59.135471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.135891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.135923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.135941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.136178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.136419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.136442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.136457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.140023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.064 [2024-07-15 13:16:59.149495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.150003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.150034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.150052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.150290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.150531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.150555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.150569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.154150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.064 [2024-07-15 13:16:59.163408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.163823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.163855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.163873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.164120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.164363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.164386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.164400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.064 [2024-07-15 13:16:59.167985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.064 [2024-07-15 13:16:59.177251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.064 [2024-07-15 13:16:59.177688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.064 [2024-07-15 13:16:59.177719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.064 [2024-07-15 13:16:59.177736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.064 [2024-07-15 13:16:59.177985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.064 [2024-07-15 13:16:59.178227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.064 [2024-07-15 13:16:59.178250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.064 [2024-07-15 13:16:59.178265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.181829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.191093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.191503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.191533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.191550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.191788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.192041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.192065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.192080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.195641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.065 [2024-07-15 13:16:59.205107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.205519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.205550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.205567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.205805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.206057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.206081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.206096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.209659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.219130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.219540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.219571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.219594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.219833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.220084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.220108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.220123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.223688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.065 [2024-07-15 13:16:59.233158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.233588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.233619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.233636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.233874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.234126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.234150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.234164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.237722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.246991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.247399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.247430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.247446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.247683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.247936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.247960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.247976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.251537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.065 [2024-07-15 13:16:59.261029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.261471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.261502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.261519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.261757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.262015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.262040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.262055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.265621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.274907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.275344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.275375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.275392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.275630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.275871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.275903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.275919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.279484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.065 [2024-07-15 13:16:59.288779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.289217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.289258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.289275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.289512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.289754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.289777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.289792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.293382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.302659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.303089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.303121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.303138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.303375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.303617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.303640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.303655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.307233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.065 [2024-07-15 13:16:59.316525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.316939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.065 [2024-07-15 13:16:59.316971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.065 [2024-07-15 13:16:59.316988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.065 [2024-07-15 13:16:59.317225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.065 [2024-07-15 13:16:59.317467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.065 [2024-07-15 13:16:59.317490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.065 [2024-07-15 13:16:59.317505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.065 [2024-07-15 13:16:59.321087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.065 [2024-07-15 13:16:59.330385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.065 [2024-07-15 13:16:59.330826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.330857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.330874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.331123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.331364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.331388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.331402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.334984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.344255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.344688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.344719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.344736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.344987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.345243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.345266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.345281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.348847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.066 [2024-07-15 13:16:59.358140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.358551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.358583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.358606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.358845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.359097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.359122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.359137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.362713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.371999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.372433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.372463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.372480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.372717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.372971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.372995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.373010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.376582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.066 [2024-07-15 13:16:59.385870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.386319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.386351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.386368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.386605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.386847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.386870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.386898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.390482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.399771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.400214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.400245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.400262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.400500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.400742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.400770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.400785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.404362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.066 [2024-07-15 13:16:59.413643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.414087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.414118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.414135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.414372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.414614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.414637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.414652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.418226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.427496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.428063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.428116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.428133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.428371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.428612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.428635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.428650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.432225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.066 [2024-07-15 13:16:59.441504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.441909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.441940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.441957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.442195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.442436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.442459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.442474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.446053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.455518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.455968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.456000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.456017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.456254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.456496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.456519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.456534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.460108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.066 [2024-07-15 13:16:59.469366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.469780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.469811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.066 [2024-07-15 13:16:59.469828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.066 [2024-07-15 13:16:59.470076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.066 [2024-07-15 13:16:59.470319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.066 [2024-07-15 13:16:59.470342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.066 [2024-07-15 13:16:59.470357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.066 [2024-07-15 13:16:59.473946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.066 [2024-07-15 13:16:59.483204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.066 [2024-07-15 13:16:59.483647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-15 13:16:59.483677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.483695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.483948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.484191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.484214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.484229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.487790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.497056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.497490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.497520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.497537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.497780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.498034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.498057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.498073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.501634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.067 [2024-07-15 13:16:59.510888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.511321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.511351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.511369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.511606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.511848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.511871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.511897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.515460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.524714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.525137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.525176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.525192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.525430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.525671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.525694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.525710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.529286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.067 [2024-07-15 13:16:59.538551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.538982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.539013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.539030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.539268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.539510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.539533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.539557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.543135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.552395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.552804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.552834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.552852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.553098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.553340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.553364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.553379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.556956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.067 [2024-07-15 13:16:59.566226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.566657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.566687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.566704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.566953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.567195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.567218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.567233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.570793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.580066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.580470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.580500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.580518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.580755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.581008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.581031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.581046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.584619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.067 [2024-07-15 13:16:59.594091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.594558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.594594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.594613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.594850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.595103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.595127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.595141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.598698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.607953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.608362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.608393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.608410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.608648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.608901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.608925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.608940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.612502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.067 [2024-07-15 13:16:59.621964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.622404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.622434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.067 [2024-07-15 13:16:59.622451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.067 [2024-07-15 13:16:59.622689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.067 [2024-07-15 13:16:59.622942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.067 [2024-07-15 13:16:59.622966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.067 [2024-07-15 13:16:59.622982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.067 [2024-07-15 13:16:59.626540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.067 [2024-07-15 13:16:59.635800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.067 [2024-07-15 13:16:59.636239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-15 13:16:59.636270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.636287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.636524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.636772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.636795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.636810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.640380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.068 [2024-07-15 13:16:59.649630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.650066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.650097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.650115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.650352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.650594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.650617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.650631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.654201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.068 [2024-07-15 13:16:59.663662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.664083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.664115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.664133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.664371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.664612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.664636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.664651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.668226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.068 [2024-07-15 13:16:59.677688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.678129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.678160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.678177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.678415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.678657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.678680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.678696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.682278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.068 [2024-07-15 13:16:59.691533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.691976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.692007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.692024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.692262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.692503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.692526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.692541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.696117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.068 [2024-07-15 13:16:59.705374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.705800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.705831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.705848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.706095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.706337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.706360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.706374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.709943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.068 [2024-07-15 13:16:59.719205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.719634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.719665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.719682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.719931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.720173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.720196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.720211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.723774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.068 [2024-07-15 13:16:59.733036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.733463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.733493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.733517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.733755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.734008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.734033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.734048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.737608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.068 [2024-07-15 13:16:59.746857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.068 [2024-07-15 13:16:59.747272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-15 13:16:59.747303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.068 [2024-07-15 13:16:59.747321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.068 [2024-07-15 13:16:59.747558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.068 [2024-07-15 13:16:59.747800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.068 [2024-07-15 13:16:59.747823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.068 [2024-07-15 13:16:59.747838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.068 [2024-07-15 13:16:59.751409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.327 [2024-07-15 13:16:59.760884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.327 [2024-07-15 13:16:59.761334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.327 [2024-07-15 13:16:59.761365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.327 [2024-07-15 13:16:59.761382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.327 [2024-07-15 13:16:59.761620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.327 [2024-07-15 13:16:59.761862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.327 [2024-07-15 13:16:59.761896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.327 [2024-07-15 13:16:59.761912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.327 [2024-07-15 13:16:59.765474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.327 [2024-07-15 13:16:59.774733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.327 [2024-07-15 13:16:59.775171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.775202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.775219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.775456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.775698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.775727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.775742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.779317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.328 [2024-07-15 13:16:59.788567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.788998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.789029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.789046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.789284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.789525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.789548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.789563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.793136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.328 [2024-07-15 13:16:59.802388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.802820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.802850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.802866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.803115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.803356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.803379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.803395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.806963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.328 [2024-07-15 13:16:59.816219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.816637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.816668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.816686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.816934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.817176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.817199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.817213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.820776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.328 [2024-07-15 13:16:59.830041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.830473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.830503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.830520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.830757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.831011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.831035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.831050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.834610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.328 [2024-07-15 13:16:59.843867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.844302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.844333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.844350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.844587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.844830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.844853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.844868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.848440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.328 [2024-07-15 13:16:59.857687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.858124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.858155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.858172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.858410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.858652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.858675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.858690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.862266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.328 [2024-07-15 13:16:59.871522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.871967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.871999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.872021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.872260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.872502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.872525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.872539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.876115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.328 [2024-07-15 13:16:59.885367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.885798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.885829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.885846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.886094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.886336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.886359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.886374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.889971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.328 [2024-07-15 13:16:59.899219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.899649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.899680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.899697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.899946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.900189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.900212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.900227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.903789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.328 [2024-07-15 13:16:59.913049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.328 [2024-07-15 13:16:59.913485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.328 [2024-07-15 13:16:59.913515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.328 [2024-07-15 13:16:59.913532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.328 [2024-07-15 13:16:59.913770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.328 [2024-07-15 13:16:59.914022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.328 [2024-07-15 13:16:59.914046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.328 [2024-07-15 13:16:59.914067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.328 [2024-07-15 13:16:59.917628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.329 [2024-07-15 13:16:59.926890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.927310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.927340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.927357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.927594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.927835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.927858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.927873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:16:59.931450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.329 [2024-07-15 13:16:59.940919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.941348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.941378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.941395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.941633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.941874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.941907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.941921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:16:59.945483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.329 [2024-07-15 13:16:59.954944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.955347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.955377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.955394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.955632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.955873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.955910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.955925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:16:59.959487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.329 [2024-07-15 13:16:59.968952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.969414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.969445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.969462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.969699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.969952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.969976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.969991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:16:59.973550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.329 [2024-07-15 13:16:59.982801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.983210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.983241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.983258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.983496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.983737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.983760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.983775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:16:59.987352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.329 [2024-07-15 13:16:59.996811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:16:59.997222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:16:59.997253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:16:59.997270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:16:59.997507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:16:59.997749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:16:59.997771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:16:59.997786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:17:00.001359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.329 [2024-07-15 13:17:00.010838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:17:00.011279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:17:00.011310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:17:00.011328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:17:00.011571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.329 [2024-07-15 13:17:00.011814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.329 [2024-07-15 13:17:00.011838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.329 [2024-07-15 13:17:00.011853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.329 [2024-07-15 13:17:00.015421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.329 [2024-07-15 13:17:00.024681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.329 [2024-07-15 13:17:00.025092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.329 [2024-07-15 13:17:00.025123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.329 [2024-07-15 13:17:00.025141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.329 [2024-07-15 13:17:00.025378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.025620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.025644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.025659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.029234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.589 [2024-07-15 13:17:00.038706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.039125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.039160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.039180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.039420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.039664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.039691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.039708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.043286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.589 [2024-07-15 13:17:00.053207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.053643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.053676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.053694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.053945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.054188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.054212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.054234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.057797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.589 [2024-07-15 13:17:00.067074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.067506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.067538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.067556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.067794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.068048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.068072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.068088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.071850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.589 [2024-07-15 13:17:00.080919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.081443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.081497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.081515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.081752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.082005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.082029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.082044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.085616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.589 [2024-07-15 13:17:00.094890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.095304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.095336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.095353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.095591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.095833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.095857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.095872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.099451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.589 [2024-07-15 13:17:00.108711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.109129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.109165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.109184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.109421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.109662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.109686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.109701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.113278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.589 [2024-07-15 13:17:00.122743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.123158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.123189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.589 [2024-07-15 13:17:00.123207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.589 [2024-07-15 13:17:00.123445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.589 [2024-07-15 13:17:00.123685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.589 [2024-07-15 13:17:00.123709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.589 [2024-07-15 13:17:00.123723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.589 [2024-07-15 13:17:00.127298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.589 [2024-07-15 13:17:00.136778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.589 [2024-07-15 13:17:00.137265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.589 [2024-07-15 13:17:00.137321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:38.590 [2024-07-15 13:17:00.137339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:38.590 [2024-07-15 13:17:00.137576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:38.590 [2024-07-15 13:17:00.137817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.590 [2024-07-15 13:17:00.137841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.590 [2024-07-15 13:17:00.137856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.590 [2024-07-15 13:17:00.141426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.590 [2024-07-15 13:17:00.150694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.151108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.151139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.151157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.151395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.151642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.151666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.151682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.155260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.164524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.164965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.165003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.165027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.165267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.165508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.165532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.165547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.169117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.178361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.178794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.178825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.178842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.179088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.179329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.179352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.179368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.182934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.192407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.192845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.192884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.192904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.193164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.193407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.193430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.193445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.197022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.206287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.206711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.206757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.206775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.207025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.207267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.207290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.207305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.210893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.220165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.220569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.220600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.220617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.220855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.221106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.221131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.221146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.224706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.234171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.234574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.234604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.234622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.234859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.235109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.235133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.235149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.238705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.248171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.248614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.248643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.248669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.248916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.249158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.249182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.249197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.590 [2024-07-15 13:17:00.252754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.590 [2024-07-15 13:17:00.262027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.590 [2024-07-15 13:17:00.262406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.590 [2024-07-15 13:17:00.262436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.590 [2024-07-15 13:17:00.262454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.590 [2024-07-15 13:17:00.262691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.590 [2024-07-15 13:17:00.262941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.590 [2024-07-15 13:17:00.262965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.590 [2024-07-15 13:17:00.262987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.591 [2024-07-15 13:17:00.266565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.591 [2024-07-15 13:17:00.276054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.591 [2024-07-15 13:17:00.276491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.591 [2024-07-15 13:17:00.276522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.591 [2024-07-15 13:17:00.276539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.591 [2024-07-15 13:17:00.276776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.591 [2024-07-15 13:17:00.277028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.591 [2024-07-15 13:17:00.277052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.591 [2024-07-15 13:17:00.277067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.591 [2024-07-15 13:17:00.280627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.289899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.290350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.290381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.290397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.290635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.290888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.290918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.290934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.294495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.303754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.304179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.304210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.304227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.304464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.304705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.304728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.304743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.308310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.317769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.318162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.318193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.318210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.318448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.318689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.318712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.318727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.322296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.331754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.332174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.332204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.332221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.332459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.332700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.332723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.332738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.336305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.345773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.346193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.346224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.346241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.346478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.346719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.346743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.346757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.350323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.359783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.360230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.360261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.360278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.360515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.360756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.360779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.360794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.364363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.373611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.374053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.374084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.374102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.374339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.374580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.374603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.374618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.850 [2024-07-15 13:17:00.378190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.850 [2024-07-15 13:17:00.387444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.850 [2024-07-15 13:17:00.387892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.850 [2024-07-15 13:17:00.387923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.850 [2024-07-15 13:17:00.387940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.850 [2024-07-15 13:17:00.388184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.850 [2024-07-15 13:17:00.388426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.850 [2024-07-15 13:17:00.388449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.850 [2024-07-15 13:17:00.388464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.392037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.401293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.401718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.401750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.401767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.402015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.402258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.402282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.402297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.405856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.415123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.415571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.415602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.415619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.415857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.416107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.416131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.416147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.419705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.428972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.429401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.429432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.429450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.429689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.429941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.429966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.429987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.433559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.442873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.443325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.443356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.443373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.443610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.443851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.443874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.443898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.447459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.456724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.457163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.457194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.457211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.457448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.457689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.457713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.457727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.461297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.470553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.470963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.470995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.471012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.471250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.471492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.471515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.471529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.475110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.484397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.484831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.484862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.484888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.485129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.485370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.485393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.485408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.488979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.498263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.498717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.498749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.498766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.499015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.499257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.499281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.499295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.502857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.512134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.512538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.512568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.512586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.512823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.513074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.513098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.513112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.516674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.526151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.526590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.526621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.526638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.526891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.527134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.527157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.851 [2024-07-15 13:17:00.527171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.851 [2024-07-15 13:17:00.530727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.851 [2024-07-15 13:17:00.540008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.851 [2024-07-15 13:17:00.540440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.851 [2024-07-15 13:17:00.540472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:38.851 [2024-07-15 13:17:00.540491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:38.851 [2024-07-15 13:17:00.540728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:38.851 [2024-07-15 13:17:00.540981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.851 [2024-07-15 13:17:00.541006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.852 [2024-07-15 13:17:00.541021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.852 [2024-07-15 13:17:00.544581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.110 [2024-07-15 13:17:00.553848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.110 [2024-07-15 13:17:00.554283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.110 [2024-07-15 13:17:00.554314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.110 [2024-07-15 13:17:00.554332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.110 [2024-07-15 13:17:00.554569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.110 [2024-07-15 13:17:00.554810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.110 [2024-07-15 13:17:00.554833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.110 [2024-07-15 13:17:00.554848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.110 [2024-07-15 13:17:00.558419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.110 [2024-07-15 13:17:00.567679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.110 [2024-07-15 13:17:00.568079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.110 [2024-07-15 13:17:00.568110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.110 [2024-07-15 13:17:00.568127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.110 [2024-07-15 13:17:00.568365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.568606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.568629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.568651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.572225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.581688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.582118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.582150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.582167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.582405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.582647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.582670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.582685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.586272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.595529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.595976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.596008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.596025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.596263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.596504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.596528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.596543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.600115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.609571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.609988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.610019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.610037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.610274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.610515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.610538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.610553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.614124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.623577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.623993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.624030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.624048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.624286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.624527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.624550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.624565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.628135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.637582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.638024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.638056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.638073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.638311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.638553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.638576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.638591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.642162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.651405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.651839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.651870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.651897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.652135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.652377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.652400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.652415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.655982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.665223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.665627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.665658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.665675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.665925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.666173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.666196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.666212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.669771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.111 [2024-07-15 13:17:00.679230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.111 [2024-07-15 13:17:00.679673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.111 [2024-07-15 13:17:00.679704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.111 [2024-07-15 13:17:00.679722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.111 [2024-07-15 13:17:00.679970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.111 [2024-07-15 13:17:00.680213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.111 [2024-07-15 13:17:00.680236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.111 [2024-07-15 13:17:00.680251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.111 [2024-07-15 13:17:00.683810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.693071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.693507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.693538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.693554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.693792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.694044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.694067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.694082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.697642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.706905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.707361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.707392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.707409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.707646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.707897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.707921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.707937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.711501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.720744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.721182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.721212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.721229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.721467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.721708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.721731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.721746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.725315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.734765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.735180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.735211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.735228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.735466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.735707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.735730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.735745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.739312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.748770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.749210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.749240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.749257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.749495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.749736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.749760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.749774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.753343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.762798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.763214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.763245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.763268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.763506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.763748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.763771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.763786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.767356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.776809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.777248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.777279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.777296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.777533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.777774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.777798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.777813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.781380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.790836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.791271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.791302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.791319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.791557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.791798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.791821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.791836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.112 [2024-07-15 13:17:00.795403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.112 [2024-07-15 13:17:00.804652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.112 [2024-07-15 13:17:00.805090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.112 [2024-07-15 13:17:00.805121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.112 [2024-07-15 13:17:00.805139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.112 [2024-07-15 13:17:00.805376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.112 [2024-07-15 13:17:00.805617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.112 [2024-07-15 13:17:00.805646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.112 [2024-07-15 13:17:00.805661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.809228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.818689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.819081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.819112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.819129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.819367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.819609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.819632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.819646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.823216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.832676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.833116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.833147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.833164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.833401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.833642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.833666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.833681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.837246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.846698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.847108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.847138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.847156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.847393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.847634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.847657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.847672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.851239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.860702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.861117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.861147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.861165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.861402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.861643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.861666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.861681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.865250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.874705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.875142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.875173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.875190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.875428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.875669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.875692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.875707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.879277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.888542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.888948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.888980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.888997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.889234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.889476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.889499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.889514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.893087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.902546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.902982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.903013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.903031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.903274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.903516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.903539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.903554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.907123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.916384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.916834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.916865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.916892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.917131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.917372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.917396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.917411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.920980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.930227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.930658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.930689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.930706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.930954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.931196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.931219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.931234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.934795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.944252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.944684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.372 [2024-07-15 13:17:00.944714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.372 [2024-07-15 13:17:00.944732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.372 [2024-07-15 13:17:00.944980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.372 [2024-07-15 13:17:00.945222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.372 [2024-07-15 13:17:00.945245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.372 [2024-07-15 13:17:00.945268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.372 [2024-07-15 13:17:00.948827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.372 [2024-07-15 13:17:00.958079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.372 [2024-07-15 13:17:00.958461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:00.958492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:00.958509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:00.958747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:00.958999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:00.959023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:00.959038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:00.962598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:00.972064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:00.972494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:00.972524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:00.972541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:00.972779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:00.973030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:00.973054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:00.973069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:00.976628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:00.986095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:00.986517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:00.986548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:00.986565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:00.986802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:00.987055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:00.987078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:00.987093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:00.990653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.000109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:01.000555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:01.000586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:01.000603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:01.000841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:01.001092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:01.001117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:01.001131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:01.004690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.013964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:01.014413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:01.014444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:01.014461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:01.014698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:01.014951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:01.014975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:01.014990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:01.018546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.027789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:01.028201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:01.028232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:01.028249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:01.028486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:01.028727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:01.028750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:01.028765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:01.032336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.041788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:01.042206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:01.042237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:01.042254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:01.042491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:01.042739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:01.042762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:01.042777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:01.046344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.055801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.373 [2024-07-15 13:17:01.056257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.373 [2024-07-15 13:17:01.056287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.373 [2024-07-15 13:17:01.056304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.373 [2024-07-15 13:17:01.056543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.373 [2024-07-15 13:17:01.056785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.373 [2024-07-15 13:17:01.056807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.373 [2024-07-15 13:17:01.056822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.373 [2024-07-15 13:17:01.060389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.373 [2024-07-15 13:17:01.069636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.632 [2024-07-15 13:17:01.070077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.632 [2024-07-15 13:17:01.070108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.632 [2024-07-15 13:17:01.070126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.632 [2024-07-15 13:17:01.070363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.632 [2024-07-15 13:17:01.070604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.070628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.070642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.074343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.083596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.083985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.084017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.084034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.084271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.084514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.084537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.084552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.088372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.097623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.098036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.098067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.098085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.098323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.098565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.098588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.098604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.102174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.111627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.112078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.112109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.112126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.112364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.112606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.112629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.112644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.116211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.125470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.125893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.125925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.125942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.126181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.126422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.126445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.126460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.130027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.139481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.139910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.139947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.139965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.140203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.140443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.140467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.140482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.144053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.153505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.153935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.153966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.153983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.154220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.154462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.154485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.154500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.158071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.167525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.167939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.167970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.167988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.168225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.168466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.168489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.168504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.172072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.181521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.181963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.181994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.182011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.182249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.182496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.182519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.182534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.186109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.195360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.195790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.195821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.195838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.196085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.196327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.196350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.196365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.199930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.209389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.209828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.209858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.209884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.210125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.210366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.633 [2024-07-15 13:17:01.210389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.633 [2024-07-15 13:17:01.210404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.633 [2024-07-15 13:17:01.213968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.633 [2024-07-15 13:17:01.223215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.633 [2024-07-15 13:17:01.223670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.633 [2024-07-15 13:17:01.223701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.633 [2024-07-15 13:17:01.223718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.633 [2024-07-15 13:17:01.223967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.633 [2024-07-15 13:17:01.224209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.224232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.224247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.227803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.237056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.237466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.237497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.237514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.237752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.238004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.238028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.238043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.241602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.251076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.251520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.251551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.251568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.251805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.252057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.252082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.252096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.255656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.264922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.265360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.265391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.265408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.265646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.265900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.265924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.265939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.269498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.278748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.279188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.279219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.279241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.279480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.279721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.279744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.279759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.283330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.292601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.293015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.293047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.293064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.293302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.293544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.293567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.293582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.297150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.306615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.307030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.307061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.307079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.307317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.307559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.307582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.307597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.311172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.634 [2024-07-15 13:17:01.320637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.634 [2024-07-15 13:17:01.321090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.634 [2024-07-15 13:17:01.321121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.634 [2024-07-15 13:17:01.321139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.634 [2024-07-15 13:17:01.321377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.634 [2024-07-15 13:17:01.321620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.634 [2024-07-15 13:17:01.321648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.634 [2024-07-15 13:17:01.321664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.634 [2024-07-15 13:17:01.325236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.894 [2024-07-15 13:17:01.334499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.894 [2024-07-15 13:17:01.334908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.894 [2024-07-15 13:17:01.334940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.894 [2024-07-15 13:17:01.334958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.894 [2024-07-15 13:17:01.335195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.894 [2024-07-15 13:17:01.335437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.894 [2024-07-15 13:17:01.335460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.894 [2024-07-15 13:17:01.335475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.894 [2024-07-15 13:17:01.339065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.894 [2024-07-15 13:17:01.348334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.894 [2024-07-15 13:17:01.348747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.894 [2024-07-15 13:17:01.348778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.894 [2024-07-15 13:17:01.348795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.894 [2024-07-15 13:17:01.349047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.894 [2024-07-15 13:17:01.349289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.894 [2024-07-15 13:17:01.349313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.894 [2024-07-15 13:17:01.349328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.894 [2024-07-15 13:17:01.352903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.894 [2024-07-15 13:17:01.362175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.894 [2024-07-15 13:17:01.362591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.894 [2024-07-15 13:17:01.362621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:39.894 [2024-07-15 13:17:01.362639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:39.894 [2024-07-15 13:17:01.362888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:39.894 [2024-07-15 13:17:01.363130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.894 [2024-07-15 13:17:01.363154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.894 [2024-07-15 13:17:01.363169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.894 [2024-07-15 13:17:01.366751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.894 [2024-07-15 13:17:01.376035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.894 [2024-07-15 13:17:01.376470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.894 [2024-07-15 13:17:01.376501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.894 [2024-07-15 13:17:01.376518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.894 [2024-07-15 13:17:01.376755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.894 [2024-07-15 13:17:01.377010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.894 [2024-07-15 13:17:01.377034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.894 [2024-07-15 13:17:01.377049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.894 [2024-07-15 13:17:01.380614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.894 [2024-07-15 13:17:01.389892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.894 [2024-07-15 13:17:01.390338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.894 [2024-07-15 13:17:01.390368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.894 [2024-07-15 13:17:01.390385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.894 [2024-07-15 13:17:01.390623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.894 [2024-07-15 13:17:01.390864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.894 [2024-07-15 13:17:01.390898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.894 [2024-07-15 13:17:01.390914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.894 [2024-07-15 13:17:01.394508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.894 [2024-07-15 13:17:01.403768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.894 [2024-07-15 13:17:01.404217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.894 [2024-07-15 13:17:01.404247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.894 [2024-07-15 13:17:01.404265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.894 [2024-07-15 13:17:01.404503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.894 [2024-07-15 13:17:01.404745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.894 [2024-07-15 13:17:01.404769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.894 [2024-07-15 13:17:01.404784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.894 [2024-07-15 13:17:01.408349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.894 [2024-07-15 13:17:01.417598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.894 [2024-07-15 13:17:01.418038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.894 [2024-07-15 13:17:01.418069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.894 [2024-07-15 13:17:01.418087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.894 [2024-07-15 13:17:01.418331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.894 [2024-07-15 13:17:01.418572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.894 [2024-07-15 13:17:01.418595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.894 [2024-07-15 13:17:01.418610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.894 [2024-07-15 13:17:01.422180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.894 [2024-07-15 13:17:01.431449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.431867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.431904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.431923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.432160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.432401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.432424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.432439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.436019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.895 [2024-07-15 13:17:01.445308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.445760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.445791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.445808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.446057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.446299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.446322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.446337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.449909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.895 [2024-07-15 13:17:01.459181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.459616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.459647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.459664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.459911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.460154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.460177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.460198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.463756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.895 [2024-07-15 13:17:01.473025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.473469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.473500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.473518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.473755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.474012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.474036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.474052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.477614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.895 [2024-07-15 13:17:01.486899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.487337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.487368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.487385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.487623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.487864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.487897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.487913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.491481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.895 [2024-07-15 13:17:01.500743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.501289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.501357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.501374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.501612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.501853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.501884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.501902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.505468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.895 [2024-07-15 13:17:01.514728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.515266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.515330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.515348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.895 [2024-07-15 13:17:01.515586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.895 [2024-07-15 13:17:01.515828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.895 [2024-07-15 13:17:01.515851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.895 [2024-07-15 13:17:01.515865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.895 [2024-07-15 13:17:01.519436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.895 [2024-07-15 13:17:01.528715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.895 [2024-07-15 13:17:01.529144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.895 [2024-07-15 13:17:01.529175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.895 [2024-07-15 13:17:01.529192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.896 [2024-07-15 13:17:01.529431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.896 [2024-07-15 13:17:01.529672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.896 [2024-07-15 13:17:01.529696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.896 [2024-07-15 13:17:01.529711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.896 [2024-07-15 13:17:01.533283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.896 [2024-07-15 13:17:01.542551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.896 [2024-07-15 13:17:01.542981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.896 [2024-07-15 13:17:01.543012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.896 [2024-07-15 13:17:01.543030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.896 [2024-07-15 13:17:01.543268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.896 [2024-07-15 13:17:01.543510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.896 [2024-07-15 13:17:01.543533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.896 [2024-07-15 13:17:01.543547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.896 [2024-07-15 13:17:01.547116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.896 [2024-07-15 13:17:01.556384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.896 [2024-07-15 13:17:01.556818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.896 [2024-07-15 13:17:01.556849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.896 [2024-07-15 13:17:01.556866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.896 [2024-07-15 13:17:01.557113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.896 [2024-07-15 13:17:01.557362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.896 [2024-07-15 13:17:01.557385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.896 [2024-07-15 13:17:01.557400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.896 [2024-07-15 13:17:01.560972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.896 [2024-07-15 13:17:01.570242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.896 [2024-07-15 13:17:01.570741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.896 [2024-07-15 13:17:01.570773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.896 [2024-07-15 13:17:01.570790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.896 [2024-07-15 13:17:01.571039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.896 [2024-07-15 13:17:01.571281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.896 [2024-07-15 13:17:01.571305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.896 [2024-07-15 13:17:01.571320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.896 [2024-07-15 13:17:01.574896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.896 [2024-07-15 13:17:01.584187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.896 [2024-07-15 13:17:01.584642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.896 [2024-07-15 13:17:01.584675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:39.896 [2024-07-15 13:17:01.584692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:39.896 [2024-07-15 13:17:01.584941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:39.896 [2024-07-15 13:17:01.585183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.896 [2024-07-15 13:17:01.585206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.896 [2024-07-15 13:17:01.585221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.896 [2024-07-15 13:17:01.588788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.156 [2024-07-15 13:17:01.598081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.598484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.598514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.598531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.598775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.599030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.599054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.599069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.602642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.156 [2024-07-15 13:17:01.611940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.612347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.612378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.612395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.612633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.612875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.612918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.612933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.616503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.156 [2024-07-15 13:17:01.625772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.626187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.626218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.626235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.626473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.626714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.626738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.626753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.630331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.156 [2024-07-15 13:17:01.639616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.640066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.640096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.640114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.640352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.640593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.640616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.640631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.644211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.156 [2024-07-15 13:17:01.653476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.653890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.653921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.653946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.654186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.654428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.654451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.654466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.658056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.156 [2024-07-15 13:17:01.667333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.667837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.667868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.667894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.668132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.668384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.156 [2024-07-15 13:17:01.668407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.156 [2024-07-15 13:17:01.668421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.156 [2024-07-15 13:17:01.672009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.156 [2024-07-15 13:17:01.681289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.156 [2024-07-15 13:17:01.681774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.156 [2024-07-15 13:17:01.681822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.156 [2024-07-15 13:17:01.681839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.156 [2024-07-15 13:17:01.682085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.156 [2024-07-15 13:17:01.682327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.682350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.682365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.685947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.695232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.695677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.695707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.695725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.695973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.696216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.696245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.696261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.699832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.157 [2024-07-15 13:17:01.709113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.709514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.709544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.709561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.709798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.710050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.710074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.710089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.713655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.723133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.723547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.723577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.723594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.723831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.724082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.724106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.724121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.727684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.157 [2024-07-15 13:17:01.737166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.737691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.737739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.737756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.738013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.738255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.738279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.738294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.741857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.751133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.751578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.751609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.751626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.751863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.752116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.752140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.752155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.755715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.157 [2024-07-15 13:17:01.764978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.765392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.765422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.765439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.765677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.765930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.765954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.765969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.769528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.778998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.779428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.779459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.779476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.779713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.779968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.779992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.780006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.783568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.157 [2024-07-15 13:17:01.792830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.793266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.793297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.793319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.793557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.793799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.793822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.793837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.797408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.806657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.807070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.807101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.807119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.807356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.807598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.807621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.807636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.811207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.157 [2024-07-15 13:17:01.820672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.821090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.821121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.821139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.821376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.157 [2024-07-15 13:17:01.821617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.157 [2024-07-15 13:17:01.821641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.157 [2024-07-15 13:17:01.821656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.157 [2024-07-15 13:17:01.825226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.157 [2024-07-15 13:17:01.834686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.157 [2024-07-15 13:17:01.835098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.157 [2024-07-15 13:17:01.835129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.157 [2024-07-15 13:17:01.835146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.157 [2024-07-15 13:17:01.835384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.158 [2024-07-15 13:17:01.835625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.158 [2024-07-15 13:17:01.835653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.158 [2024-07-15 13:17:01.835669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.158 [2024-07-15 13:17:01.839243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.158 [2024-07-15 13:17:01.848697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.158 [2024-07-15 13:17:01.849153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.158 [2024-07-15 13:17:01.849184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.158 [2024-07-15 13:17:01.849201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.158 [2024-07-15 13:17:01.849439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.158 [2024-07-15 13:17:01.849680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.158 [2024-07-15 13:17:01.849703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.158 [2024-07-15 13:17:01.849718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.158 [2024-07-15 13:17:01.853294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.416 [2024-07-15 13:17:01.862556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.416 [2024-07-15 13:17:01.862988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.416 [2024-07-15 13:17:01.863019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.416 [2024-07-15 13:17:01.863037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.416 [2024-07-15 13:17:01.863275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.416 [2024-07-15 13:17:01.863516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.416 [2024-07-15 13:17:01.863539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.416 [2024-07-15 13:17:01.863554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.416 [2024-07-15 13:17:01.867126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.416 [2024-07-15 13:17:01.876381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.416 [2024-07-15 13:17:01.876785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.416 [2024-07-15 13:17:01.876816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.416 [2024-07-15 13:17:01.876834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.416 [2024-07-15 13:17:01.877082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.416 [2024-07-15 13:17:01.877324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.416 [2024-07-15 13:17:01.877348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.416 [2024-07-15 13:17:01.877362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.416 [2024-07-15 13:17:01.880932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.416 [2024-07-15 13:17:01.890403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.416 [2024-07-15 13:17:01.890839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.416 [2024-07-15 13:17:01.890869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.416 [2024-07-15 13:17:01.890898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.416 [2024-07-15 13:17:01.891137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.416 [2024-07-15 13:17:01.891378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.416 [2024-07-15 13:17:01.891401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.416 [2024-07-15 13:17:01.891417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.416 [2024-07-15 13:17:01.894986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.416 [2024-07-15 13:17:01.904237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.416 [2024-07-15 13:17:01.904648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.416 [2024-07-15 13:17:01.904679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.416 [2024-07-15 13:17:01.904697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.416 [2024-07-15 13:17:01.904945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.416 [2024-07-15 13:17:01.905187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.416 [2024-07-15 13:17:01.905211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.416 [2024-07-15 13:17:01.905225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.416 [2024-07-15 13:17:01.908784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.416 [2024-07-15 13:17:01.918255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.416 [2024-07-15 13:17:01.918689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.416 [2024-07-15 13:17:01.918719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420 00:24:40.416 [2024-07-15 13:17:01.918736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set 00:24:40.416 [2024-07-15 13:17:01.918985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor 00:24:40.416 [2024-07-15 13:17:01.919227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.416 [2024-07-15 13:17:01.919250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.416 [2024-07-15 13:17:01.919265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.416 [2024-07-15 13:17:01.922826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.416 [2024-07-15 13:17:01.932086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.416 [2024-07-15 13:17:01.932494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.416 [2024-07-15 13:17:01.932525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.416 [2024-07-15 13:17:01.932542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.416 [2024-07-15 13:17:01.932785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.416 [2024-07-15 13:17:01.933038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.416 [2024-07-15 13:17:01.933062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.416 [2024-07-15 13:17:01.933077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.416 [2024-07-15 13:17:01.936636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.416 [2024-07-15 13:17:01.946104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.416 [2024-07-15 13:17:01.946544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:01.946575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:01.946591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:01.946828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:01.947079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:01.947104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:01.947119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:01.950679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:01.959941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:01.960344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:01.960375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:01.960392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:01.960629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:01.960870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:01.960904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:01.960919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:01.964481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:01.973948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:01.974373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:01.974404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:01.974421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:01.974658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:01.974912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:01.974935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:01.974956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:01.978516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:01.987775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:01.988192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:01.988224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:01.988242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:01.988479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:01.988721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:01.988745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:01.988760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:01.992340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:02.001802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:02.002248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.002281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.002299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.002537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:02.002778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:02.002802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:02.002817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:02.006389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:02.015640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:02.016050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.016080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.016098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.016335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:02.016576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:02.016599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:02.016614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:02.020184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:02.029643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:02.030082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.030118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.030136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.030374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:02.030615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:02.030638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:02.030653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:02.034227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
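Every pass through the retry loop above fails the same way: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting connections on 10.0.0.2:4420 while bdevperf keeps trying to reconnect the controller, which is consistent with the target process being killed just below. A quick way to confirm the errno name (a hedged aside, assuming a Linux host with python3 available):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused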
00:24:40.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3933009 Killed "${NVMF_APP[@]}" "$@"
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3934095
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3934095
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3934095 ']'
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:40.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:40.417 [2024-07-15 13:17:02.043498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.417 [2024-07-15 13:17:02.044290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.044323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.044342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.044582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:02.044828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:02.044851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:02.044867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:02.048445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
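The tgt_init/nvmfappstart trace above restarts the killed target and then polls its RPC socket (waitforlisten, with rpc_addr=/var/tmp/spdk.sock and max_retries=100). A minimal hand-run sketch of the same idea, assuming the same network namespace and paths as in this log; the real helper polls the RPC server itself rather than just the socket file:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    for _ in $(seq 1 100); do              # max_retries=100, as in the log
        [ -S /var/tmp/spdk.sock ] && break # wait for the RPC socket to appear
        sleep 0.5
    done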
00:24:40.417 [2024-07-15 13:17:02.057500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:02.057912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.057945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.057974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.058214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.417 [2024-07-15 13:17:02.058456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.417 [2024-07-15 13:17:02.058479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.417 [2024-07-15 13:17:02.058494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.417 [2024-07-15 13:17:02.062065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.417 [2024-07-15 13:17:02.071528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.417 [2024-07-15 13:17:02.071960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.417 [2024-07-15 13:17:02.071991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.417 [2024-07-15 13:17:02.072009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.417 [2024-07-15 13:17:02.072247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.418 [2024-07-15 13:17:02.072489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.418 [2024-07-15 13:17:02.072512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.418 [2024-07-15 13:17:02.072527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.418 [2024-07-15 13:17:02.076095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.418 [2024-07-15 13:17:02.085566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.418 [2024-07-15 13:17:02.085999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.418 [2024-07-15 13:17:02.086030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.418 [2024-07-15 13:17:02.086048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.418 [2024-07-15 13:17:02.086286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.418 [2024-07-15 13:17:02.086527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.418 [2024-07-15 13:17:02.086551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.418 [2024-07-15 13:17:02.086566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.418 [2024-07-15 13:17:02.090135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.418 [2024-07-15 13:17:02.093014] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:40.418 [2024-07-15 13:17:02.093098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:40.418 [2024-07-15 13:17:02.099788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.418 [2024-07-15 13:17:02.100221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.418 [2024-07-15 13:17:02.100253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.418 [2024-07-15 13:17:02.100277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.418 [2024-07-15 13:17:02.100517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.418 [2024-07-15 13:17:02.100759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.418 [2024-07-15 13:17:02.100782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.418 [2024-07-15 13:17:02.100797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.418 [2024-07-15 13:17:02.104367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.418 [2024-07-15 13:17:02.113819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.418 [2024-07-15 13:17:02.114239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.418 [2024-07-15 13:17:02.114272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.418 [2024-07-15 13:17:02.114290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.418 [2024-07-15 13:17:02.114528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.677 [2024-07-15 13:17:02.114770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.677 [2024-07-15 13:17:02.114793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.677 [2024-07-15 13:17:02.114808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.677 [2024-07-15 13:17:02.118375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.677 [2024-07-15 13:17:02.127839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.677 [2024-07-15 13:17:02.128286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.677 [2024-07-15 13:17:02.128317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.677 [2024-07-15 13:17:02.128334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.677 [2024-07-15 13:17:02.128572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.677 [2024-07-15 13:17:02.128814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.677 [2024-07-15 13:17:02.128838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.677 [2024-07-15 13:17:02.128853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.677 [2024-07-15 13:17:02.132424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.677 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.677 [2024-07-15 13:17:02.141687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.677 [2024-07-15 13:17:02.142141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.677 [2024-07-15 13:17:02.142172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.677 [2024-07-15 13:17:02.142190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.677 [2024-07-15 13:17:02.142428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.677 [2024-07-15 13:17:02.142670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.677 [2024-07-15 13:17:02.142700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.677 [2024-07-15 13:17:02.142716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.677 [2024-07-15 13:17:02.146285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.677 [2024-07-15 13:17:02.155544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.677 [2024-07-15 13:17:02.155955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.677 [2024-07-15 13:17:02.155987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.677 [2024-07-15 13:17:02.156004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.677 [2024-07-15 13:17:02.156241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.677 [2024-07-15 13:17:02.156482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.156506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.156520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.160097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.169388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.169804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.169835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.169853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.170100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.170343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.170366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.170381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.173947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.174128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:40.678 [2024-07-15 13:17:02.183431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.184055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.184100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.184121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.184369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.184615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.184639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.184656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.188255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.197311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.197853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.197900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.197921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.198167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.198412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.198436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.198454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.202018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.211265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.211690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.211720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.211738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.211987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.212229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.212252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.212268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.215828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.225089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.225536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.225567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.225584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.225822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.226075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.226099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.226114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.229674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.238931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.239402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.239435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.239463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.239703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.239956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.239980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.239995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.243552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.252723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.253334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.253373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.253393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.253642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.253850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.253894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.253912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.257098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.266317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.266745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.266773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.266789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.267014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.267259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.267279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.267294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.270374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.279750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.280148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.280176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.280206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.280441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.280639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.280665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.280679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.283762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.678 [2024-07-15 13:17:02.293083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.678 [2024-07-15 13:17:02.293457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.678 [2024-07-15 13:17:02.293485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.678 [2024-07-15 13:17:02.293501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.678 [2024-07-15 13:17:02.293737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.678 [2024-07-15 13:17:02.293980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.678 [2024-07-15 13:17:02.294002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.678 [2024-07-15 13:17:02.294015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.678 [2024-07-15 13:17:02.296274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:40.678 [2024-07-15 13:17:02.296308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:40.678 [2024-07-15 13:17:02.296329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:40.678 [2024-07-15 13:17:02.296350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:40.678 [2024-07-15 13:17:02.296366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
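The app_setup_trace notices above spell out how the trace data can be collected; followed literally (the spdk_trace tool is built alongside the target, exact path may vary by build):

    spdk_trace -s nvmf -i 0              # snapshot of events at runtime, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm file for offline analysis/debug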
00:24:40.678 [2024-07-15 13:17:02.296425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:40.678 [2024-07-15 13:17:02.296455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:40.679 [2024-07-15 13:17:02.296462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:40.679 [2024-07-15 13:17:02.297075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.679 [2024-07-15 13:17:02.306641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.307226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.307266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.307284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.307531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.679 [2024-07-15 13:17:02.307741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.679 [2024-07-15 13:17:02.307761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.679 [2024-07-15 13:17:02.307776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.679 [2024-07-15 13:17:02.311052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.679 [2024-07-15 13:17:02.320187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.320734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.320774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.320802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.321063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.679 [2024-07-15 13:17:02.321309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.679 [2024-07-15 13:17:02.321330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.679 [2024-07-15 13:17:02.321345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.679 [2024-07-15 13:17:02.324563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
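The three reactor threads above line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3, matching the earlier "Total cores available: 3" notice. A one-line shell check of that arithmetic:

    printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # -> 0xE, cores 1-3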
00:24:40.679 [2024-07-15 13:17:02.333778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.334372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.334412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.334432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.334682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.679 [2024-07-15 13:17:02.334919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.679 [2024-07-15 13:17:02.334941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.679 [2024-07-15 13:17:02.334957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.679 [2024-07-15 13:17:02.338150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.679 [2024-07-15 13:17:02.347302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.347893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.347932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.347952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.348189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.679 [2024-07-15 13:17:02.348413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.679 [2024-07-15 13:17:02.348433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.679 [2024-07-15 13:17:02.348449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.679 [2024-07-15 13:17:02.351580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.679 [2024-07-15 13:17:02.360736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.361343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.361384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.361403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.361654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.679 [2024-07-15 13:17:02.361861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.679 [2024-07-15 13:17:02.361918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.679 [2024-07-15 13:17:02.361935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.679 [2024-07-15 13:17:02.365111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.679 [2024-07-15 13:17:02.374446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.679 [2024-07-15 13:17:02.375010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.679 [2024-07-15 13:17:02.375050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.679 [2024-07-15 13:17:02.375068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.679 [2024-07-15 13:17:02.375290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.375510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.375531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.375547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.378714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.938 [2024-07-15 13:17:02.388018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 [2024-07-15 13:17:02.388441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.938 [2024-07-15 13:17:02.388469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.938 [2024-07-15 13:17:02.388485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.938 [2024-07-15 13:17:02.388729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.388962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.388983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.388996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.392134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.938 [2024-07-15 13:17:02.401657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 [2024-07-15 13:17:02.402050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.938 [2024-07-15 13:17:02.402078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.938 [2024-07-15 13:17:02.402094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.938 [2024-07-15 13:17:02.402323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.402536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.402556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.402569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.405795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
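The rpc_cmd trace lines interleaved with the reconnect errors below rebuild the bdevperf target configuration step by step: create the TCP transport, back it with a malloc bdev, create the subsystem, attach the namespace, and add the listener. Gathered in one place, the same bring-up could be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock; this is a sketch of what rpc_cmd wraps, not a verbatim excerpt from the run:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420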
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.938 [2024-07-15 13:17:02.415140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 [2024-07-15 13:17:02.415552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.938 [2024-07-15 13:17:02.415581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.938 [2024-07-15 13:17:02.415597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.938 [2024-07-15 13:17:02.415826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.416067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.416089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.416104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.419369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.938 [2024-07-15 13:17:02.428706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.938 [2024-07-15 13:17:02.429094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.938 [2024-07-15 13:17:02.429123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.938 [2024-07-15 13:17:02.429139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.938 [2024-07-15 13:17:02.429380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.429584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.429605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.429617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.432837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.938 [2024-07-15 13:17:02.434057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.938 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.938 [2024-07-15 13:17:02.442293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 [2024-07-15 13:17:02.442699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.938 [2024-07-15 13:17:02.442726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.938 [2024-07-15 13:17:02.442746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.938 [2024-07-15 13:17:02.442970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.938 [2024-07-15 13:17:02.443203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.938 [2024-07-15 13:17:02.443237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.938 [2024-07-15 13:17:02.443250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.938 [2024-07-15 13:17:02.446463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.938 [2024-07-15 13:17:02.455826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.938 [2024-07-15 13:17:02.456268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.939 [2024-07-15 13:17:02.456298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.939 [2024-07-15 13:17:02.456314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.939 [2024-07-15 13:17:02.456556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.939 [2024-07-15 13:17:02.456761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.939 [2024-07-15 13:17:02.456781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.939 [2024-07-15 13:17:02.456794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.939 [2024-07-15 13:17:02.459987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.939 [2024-07-15 13:17:02.469353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.939 [2024-07-15 13:17:02.469988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.939 [2024-07-15 13:17:02.470026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.939 [2024-07-15 13:17:02.470045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.939 [2024-07-15 13:17:02.470294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.939 [2024-07-15 13:17:02.470503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.939 [2024-07-15 13:17:02.470523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.939 [2024-07-15 13:17:02.470538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.939 Malloc0
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.939 [2024-07-15 13:17:02.473785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.939 [2024-07-15 13:17:02.483026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.939 [2024-07-15 13:17:02.483444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.939 [2024-07-15 13:17:02.483472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2120ac0 with addr=10.0.0.2, port=4420
00:24:40.939 [2024-07-15 13:17:02.483488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120ac0 is same with the state(5) to be set
00:24:40.939 [2024-07-15 13:17:02.483717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2120ac0 (9): Bad file descriptor
00:24:40.939 [2024-07-15 13:17:02.483979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.939 [2024-07-15 13:17:02.484002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.939 [2024-07-15 13:17:02.484015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.939 [2024-07-15 13:17:02.487291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:40.939 [2024-07-15 13:17:02.492346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.939 13:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3933428
00:24:40.939 [2024-07-15 13:17:02.496638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.939 [2024-07-15 13:17:02.574971] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:50.902
00:24:50.902 Latency(us)
00:24:50.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:50.902 Verification LBA range: start 0x0 length 0x4000
00:24:50.902 Nvme1n1 : 15.00 6178.86 24.14 8714.22 0.00 8566.85 1019.45 18447.17
00:24:50.902 ===================================================================================================================
00:24:50.902 Total : 6178.86 24.14 8714.22 0.00 8566.85 1019.45 18447.17
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:50.902 rmmod nvme_tcp
00:24:50.902 rmmod nvme_fabrics
00:24:50.902 rmmod nvme_keyring
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3934095 ']'
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3934095
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3934095 ']'
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3934095
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3934095
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3934095'
00:24:50.902 killing process with pid 3934095
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3934095
00:24:50.902 13:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3934095
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:50.902 13:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:52.807 13:17:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:52.807
00:24:52.807 real 0m23.298s
00:24:52.807 user 1m3.181s
00:24:52.807 sys 0m4.160s
00:24:52.807 13:17:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:52.807 13:17:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:52.807 ************************************
00:24:52.807 END TEST nvmf_bdevperf
00:24:52.807 ************************************
00:24:52.807 13:17:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:52.807 13:17:14 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:24:52.807 13:17:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:52.807 13:17:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:52.807 13:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:52.807 ************************************
00:24:52.807 START TEST nvmf_target_disconnect
00:24:52.807 ************************************
00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:24:52.807 * Looking for test storage...
00:24:52.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:52.807 13:17:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.808 13:17:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.718 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.719 13:17:16 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:24:54.719 00:24:54.719 --- 10.0.0.2 ping statistics --- 00:24:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.719 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:24:54.719 00:24:54.719 --- 10.0.0.1 ping statistics --- 00:24:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.719 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.719 13:17:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.013 ************************************ 00:24:55.013 START TEST nvmf_target_disconnect_tc1 00:24:55.013 ************************************ 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:55.013 
13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.013 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.013 [2024-07-15 13:17:16.522222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.013 [2024-07-15 13:17:16.522291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a71a0 with addr=10.0.0.2, port=4420 00:24:55.013 [2024-07-15 13:17:16.522331] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:55.013 [2024-07-15 13:17:16.522353] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:55.013 [2024-07-15 13:17:16.522367] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:55.013 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:55.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:55.013 Initializing NVMe Controllers 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:55.013 00:24:55.013 real 0m0.094s 00:24:55.013 user 0m0.035s 00:24:55.013 sys 
0m0.060s 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:55.013 ************************************ 00:24:55.013 END TEST nvmf_target_disconnect_tc1 00:24:55.013 ************************************ 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.013 ************************************ 00:24:55.013 START TEST nvmf_target_disconnect_tc2 00:24:55.013 ************************************ 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3937244 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3937244 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3937244 ']' 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
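
The nvmf/common.sh TCP init traced above runs initiator and target over a real link by splitting the two E810 ports across network namespaces: cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), and nvmf_tgt is then launched inside that namespace. Condensed from the commands in the trace, keeping the cvl_0_* interface names this host assigned:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # every later nvmfappstart is prefixed with the namespace, as seen above:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
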
00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.013 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.013 [2024-07-15 13:17:16.633796] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:24:55.013 [2024-07-15 13:17:16.633897] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.014 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.014 [2024-07-15 13:17:16.703837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.272 [2024-07-15 13:17:16.818927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.272 [2024-07-15 13:17:16.819000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.272 [2024-07-15 13:17:16.819029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.272 [2024-07-15 13:17:16.819040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.272 [2024-07-15 13:17:16.819050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.272 [2024-07-15 13:17:16.819208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:55.272 [2024-07-15 13:17:16.819261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:55.272 [2024-07-15 13:17:16.819298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:55.272 [2024-07-15 13:17:16.819302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.272 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.272 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:55.272 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.272 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.272 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 Malloc0 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:55.531 13:17:16 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 [2024-07-15 13:17:17.003036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 [2024-07-15 13:17:17.031317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3937273 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:55.531 13:17:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.531 EAL: No free 2048 kB 
hugepages reported on node 1 00:24:57.438 13:17:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3937244 00:24:57.438 13:17:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Write completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Write completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Write completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Write completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.438 Read completed with error (sct=0, sc=8) 00:24:57.438 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 [2024-07-15 13:17:19.056674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 4 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 [2024-07-15 13:17:19.057078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O 
failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Read completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.439 Write completed with error (sct=0, sc=8) 00:24:57.439 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 [2024-07-15 13:17:19.057376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 
00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Write completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 Read completed with error (sct=0, sc=8) 00:24:57.440 starting I/O failed 00:24:57.440 [2024-07-15 13:17:19.057722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:57.440 [2024-07-15 13:17:19.058016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.058052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 00:24:57.440 [2024-07-15 13:17:19.058245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.058277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 00:24:57.440 [2024-07-15 13:17:19.058470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.058496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 00:24:57.440 [2024-07-15 13:17:19.058702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.058731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 00:24:57.440 [2024-07-15 13:17:19.058924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.058952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 00:24:57.440 [2024-07-15 13:17:19.059090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.440 [2024-07-15 13:17:19.059116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:57.440 qpair failed and we were unable to recover it. 
00:24:57.440 [2024-07-15 13:17:19.059311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.440 [2024-07-15 13:17:19.059337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:57.440 qpair failed and we were unable to recover it.
[... the three messages above repeat continuously for tqpair=0x7f5e8c000b90 through 2024-07-15 13:17:19.072467 ...]
00:24:57.442 [2024-07-15 13:17:19.072660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.442 [2024-07-15 13:17:19.072700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.442 qpair failed and we were unable to recover it.
[... the three messages above repeat continuously for tqpair=0x7f5e84000b90 through 2024-07-15 13:17:19.099578 ...]
00:24:57.447 [2024-07-15 13:17:19.099733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.099759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.099900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.099926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.100084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.100110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.100288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.100313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.100471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.100498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.100740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.100769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.100950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.100976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.101211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.101238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.101439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.101464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.101646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.101671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 
00:24:57.447 [2024-07-15 13:17:19.101831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.101857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.102898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.102924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.103123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.103156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.103365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.103391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.103540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.103567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 
00:24:57.447 [2024-07-15 13:17:19.103716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.103743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.103901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.103927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.104107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.104283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.104445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.104629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.104838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.104992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.105018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.105197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.105225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 00:24:57.447 [2024-07-15 13:17:19.105464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.447 [2024-07-15 13:17:19.105490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.447 qpair failed and we were unable to recover it. 
00:24:57.447 [2024-07-15 13:17:19.105670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.105695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.105819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.105845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.105999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.106184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.106357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.106539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.106728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.106912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.106939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.107070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.107096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.107218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.107244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 
00:24:57.448 [2024-07-15 13:17:19.107376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.107618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.107644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.107802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.107828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.108007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.108037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.108207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.108236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.108406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.108432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.108602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.108630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.108822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.108851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.109015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.109206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 
00:24:57.448 [2024-07-15 13:17:19.109370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.109545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.109698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.109913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.109940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.110095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.110121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.110274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.110301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.110520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.110546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.110675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.110705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.110862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.110894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.111074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.111102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 
00:24:57.448 [2024-07-15 13:17:19.111249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.111277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.111431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.111457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.111603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.111646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.111843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.111868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.112071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.448 [2024-07-15 13:17:19.112097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.448 qpair failed and we were unable to recover it. 00:24:57.448 [2024-07-15 13:17:19.112256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.112282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.112441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.112468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.112681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.112709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.112887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.112914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.113042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.113068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 
00:24:57.449 [2024-07-15 13:17:19.113224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.113265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.113439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.113467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.113646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.113671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.113824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.113850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.114037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.114064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.114224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.114250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.114381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.114406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.114597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.114623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.114801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.114827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.115005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.115032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 
00:24:57.449 [2024-07-15 13:17:19.115198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.449 [2024-07-15 13:17:19.115226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.449 qpair failed and we were unable to recover it. 00:24:57.449 [2024-07-15 13:17:19.115381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.115406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.115558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.115585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.115743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.115786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.115974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.116156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.116402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.116586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.116768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.116950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.116992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 
00:24:57.451 [2024-07-15 13:17:19.117166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.117193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.117392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.117421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.117624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.117650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.117852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.117887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.118087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.118112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.118241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.118267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.118434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.118459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.118616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.118648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.118854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.118887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.119064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.119089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 
00:24:57.451 [2024-07-15 13:17:19.119245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.119271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.119422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.451 [2024-07-15 13:17:19.119449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.451 qpair failed and we were unable to recover it. 00:24:57.451 [2024-07-15 13:17:19.119611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.119637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.119819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.119845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.120071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.120097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.120279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.120305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.120502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.120530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.120709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.120734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.120969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.120996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.121155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.121197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 
00:24:57.452 [2024-07-15 13:17:19.121377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.121402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.121541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.121568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.121747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.121773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.122008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.122052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.122255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.122281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.122437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.122480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.122631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.122656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.122777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.122804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.123062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.123090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.123241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.123269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 
00:24:57.452 [2024-07-15 13:17:19.123447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.123473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.123628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.123655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.123830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.123856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.124939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.124966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 00:24:57.452 [2024-07-15 13:17:19.125089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.452 [2024-07-15 13:17:19.125116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.452 qpair failed and we were unable to recover it. 
00:24:57.452 [2024-07-15 13:17:19.125311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.452 [2024-07-15 13:17:19.125340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.452 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, with only timestamps advancing, from 13:17:19.125 through 13:17:19.166 ...]
00:24:57.728 [2024-07-15 13:17:19.166197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.728 [2024-07-15 13:17:19.166223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.728 qpair failed and we were unable to recover it.
00:24:57.728 [2024-07-15 13:17:19.166421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.166450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.166616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.166646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.166820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.166845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.166987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.167169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.167327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.167483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.167663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.167896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.167922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.168098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.168127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 
00:24:57.728 [2024-07-15 13:17:19.168333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.168362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.168567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.168593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.168802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-07-15 13:17:19.168831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.728 qpair failed and we were unable to recover it. 00:24:57.728 [2024-07-15 13:17:19.169005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.169034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.169229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.169255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.169443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.169472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.169642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.169671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.169873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.169904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.170062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.170088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.170218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.170244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 
00:24:57.729 [2024-07-15 13:17:19.170419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.170446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.170598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.170641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.170808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.170836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.171021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.171048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.171213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.171243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.171440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.171469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.171672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.171697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.171836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.171864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.172025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.172051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.172200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.172226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 
00:24:57.729 [2024-07-15 13:17:19.172385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.172411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.172591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.172616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.172789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.172815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.172988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.173017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.173210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.173238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.173437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.173463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.173670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.173696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.173901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.173930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.174121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.174147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.174288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.174318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 
00:24:57.729 [2024-07-15 13:17:19.174519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.174548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.174749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.174775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.174940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.174966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.175137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.175163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.175283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-07-15 13:17:19.175309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.729 qpair failed and we were unable to recover it. 00:24:57.729 [2024-07-15 13:17:19.175434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.175461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.175647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.175689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.175864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.175911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.176086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.176114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.176293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.176319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 
00:24:57.730 [2024-07-15 13:17:19.176497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.176524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.176724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.176753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.176925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.176951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.177131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.177156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.177331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.177360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.177499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.177527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.177701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.177727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.177886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.177913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.178065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.178091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.178247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.178272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 
00:24:57.730 [2024-07-15 13:17:19.178441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.178470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.178649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.178677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.178850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.178882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.179044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.179069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.179221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.179266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.179471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.179497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.179625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.179668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.179861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.180086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.180111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.180256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.180281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 
00:24:57.730 [2024-07-15 13:17:19.180453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.180482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.180657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.180683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.180835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.180864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.181963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.181989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.182161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.182190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 
00:24:57.730 [2024-07-15 13:17:19.182370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.182395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.730 qpair failed and we were unable to recover it. 00:24:57.730 [2024-07-15 13:17:19.182590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-07-15 13:17:19.182618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.182787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.182817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.183018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.183045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.183227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.183255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.183413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.183439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.183627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.183653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.183798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.183825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.184016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.184201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 
00:24:57.731 [2024-07-15 13:17:19.184350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.184564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.184768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.184944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.184970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.185117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.185143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.185292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.185318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.185488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.185517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.185682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.185711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.185857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.185889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.186094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.186122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 
00:24:57.731 [2024-07-15 13:17:19.186290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.186319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.186495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.186521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.186676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.186702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.186911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.186940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.187110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.187140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.187297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.187323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.187474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.187500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.187663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.187688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.187841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.187867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.188036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.188063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 
00:24:57.731 [2024-07-15 13:17:19.188244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.188270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.188443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.188472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.188608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.188637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.188805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.188834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.189018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.189045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.189238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.189264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.189444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.189470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.189600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.189626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.189818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.189860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.190053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.190079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 
00:24:57.731 [2024-07-15 13:17:19.190232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-07-15 13:17:19.190274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.731 qpair failed and we were unable to recover it. 00:24:57.731 [2024-07-15 13:17:19.190470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.190499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.190673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.190699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.190869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.190902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 00:24:57.732 [2024-07-15 13:17:19.191952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-07-15 13:17:19.191981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.732 qpair failed and we were unable to recover it. 
00:24:57.732 [2024-07-15 13:17:19.192181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.732 [2024-07-15 13:17:19.192210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.732 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 13:17:19.192 through 13:17:19.234: connect() is refused with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."; every attempt targets tqpair=0x7f5e84000b90 except a short run against tqpair=0x7f5e94000b90 at 13:17:19.200-13:17:19.201 ...]
00:24:57.738 [2024-07-15 13:17:19.234255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.738 [2024-07-15 13:17:19.234284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.738 qpair failed and we were unable to recover it.
00:24:57.738 [2024-07-15 13:17:19.234491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.234517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.234698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.234723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.234933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.234959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.235092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.235117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.235267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.235297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.235437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.235479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.235728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.235756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.235927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.235956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.236137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.236163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.236345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.236370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 
00:24:57.738 [2024-07-15 13:17:19.236544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.236572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.236742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.236769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.236920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.236946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.237109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.237134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.237345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.237373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.237571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.237599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.237774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.237800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.237959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.238183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.238335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 
00:24:57.738 [2024-07-15 13:17:19.238527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.238719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.238894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.238920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.239080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.239105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.239319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.239344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.239518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.239546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.239714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.239744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.239915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.239944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.240100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.240126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.240263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.240290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 
00:24:57.738 [2024-07-15 13:17:19.240479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.240505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.240681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.240725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.240899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-07-15 13:17:19.240925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.738 qpair failed and we were unable to recover it. 00:24:57.738 [2024-07-15 13:17:19.241077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.241103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.241318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.241344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.241527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.241553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.241733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.241759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.241939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.241969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.242148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.242173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.242374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.242403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 
00:24:57.739 [2024-07-15 13:17:19.242602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.242628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.242827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.242855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.243066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.243092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.243261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.243286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.243443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.243469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.243632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.243657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.243855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.243891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.244060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.244088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.244265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.244290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.244415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.244440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 
00:24:57.739 [2024-07-15 13:17:19.244596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.244621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.244827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.244856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.245016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.245041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.245174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.245200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.245355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.245380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.245591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.245616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.245807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.245833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.246022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.246243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.246424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 
00:24:57.739 [2024-07-15 13:17:19.246601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.246752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.246915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.246942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.247102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.247147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.247350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.247376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.247562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.247589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.247741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.247767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.247971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.248182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.248340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 
00:24:57.739 [2024-07-15 13:17:19.248509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.248719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.248905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-07-15 13:17:19.248932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.739 qpair failed and we were unable to recover it. 00:24:57.739 [2024-07-15 13:17:19.249088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.249114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.249242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.249283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.249479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.249507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.249709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.249735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.249922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.249952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.250170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.250198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.250367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.250395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 
00:24:57.740 [2024-07-15 13:17:19.250571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.250598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.250724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.250766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.250948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.250974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.251124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.251150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.251338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.251364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.251540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.251569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.251779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.251805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.251959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.251986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.252145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.252171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.252349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.252375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 
00:24:57.740 [2024-07-15 13:17:19.252610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.252652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.252854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.252887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.253023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.253051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.253248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.253277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.253444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.253474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.253643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.253671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.253874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.253906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.254108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.254136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.254308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.254338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.254513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.254543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 
00:24:57.740 [2024-07-15 13:17:19.254747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.254774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.254959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.254985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.255139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.255165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.255350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.255379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.255577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.255603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.255788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.255814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.255999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.256028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.256197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.256226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.256399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.256425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.256599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.256628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 
00:24:57.740 [2024-07-15 13:17:19.256799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.256828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.257029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.257063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.257239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-07-15 13:17:19.257265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.740 qpair failed and we were unable to recover it. 00:24:57.740 [2024-07-15 13:17:19.257407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.257435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.257615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.257641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.257791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.257817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.257954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.257981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.258144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.258188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.258383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.258412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.258548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.258576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 
00:24:57.741 [2024-07-15 13:17:19.258721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.258746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.258901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.259085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.259111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.259257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.259282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.259439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.259464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.259675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.259703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.259839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.259869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.260072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.260098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.260282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.260308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 00:24:57.741 [2024-07-15 13:17:19.260485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.741 [2024-07-15 13:17:19.260514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.741 qpair failed and we were unable to recover it. 
00:24:57.741 [2024-07-15 13:17:19.260679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-07-15 13:17:19.260709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.741 qpair failed and we were unable to recover it.
00:24:57.741 [... the same three-line failure record (posix_sock_create connect() errno = 111, sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats back-to-back from 13:17:19.260679 through 13:17:19.339981, with longer pauses before the retry bursts at 13:17:19.317845 and 13:17:19.337494; only the timestamps differ between repetitions ...]
00:24:57.746 [2024-07-15 13:17:19.339952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-07-15 13:17:19.339981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:57.746 qpair failed and we were unable to recover it.
00:24:57.746 [2024-07-15 13:17:19.340187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.746 [2024-07-15 13:17:19.340215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.340361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.340388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.340571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.340596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.340808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.340835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.341089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.341274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.341457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.341660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.341839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.341995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.342021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 
00:24:57.747 [2024-07-15 13:17:19.342152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.342176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.342349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.342381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.342537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.342568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.342786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.342811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.342999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.343184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.343362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.343527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.343676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.343866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.343899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 
00:24:57.747 [2024-07-15 13:17:19.344036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.344063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.344220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.344246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.344431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.344456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.344626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.344651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.344884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.344910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.345059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.345085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.345248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.345273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.345483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.345511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.345693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.345718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.345893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.345927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 
00:24:57.747 [2024-07-15 13:17:19.346066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.346092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.346278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.346326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.346481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.346526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.346678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.346706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.346861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.346901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.347054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.347080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.347242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.347268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.347472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.347500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.347695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.347720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.347957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.347983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 
00:24:57.747 [2024-07-15 13:17:19.348256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.747 [2024-07-15 13:17:19.348284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.747 qpair failed and we were unable to recover it. 00:24:57.747 [2024-07-15 13:17:19.348481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.348510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.348714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.348740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.348983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.349009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.349226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.349254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.349405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.349435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.349594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.349620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.349761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.349811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.350020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.350047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.350248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.350277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 
00:24:57.748 [2024-07-15 13:17:19.350528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.350554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.350734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.350768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.350917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.350962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.351168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.351196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.351418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.351443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.351653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.351681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.351826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.351856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.352062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.352088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.352272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.352297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.352468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.352496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 
00:24:57.748 [2024-07-15 13:17:19.352667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.352696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.352884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.352909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.353092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.353117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.353330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.353356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.353516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.353541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.353684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.353709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.353868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.353900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.354072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.354098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.354358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.354386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.354603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.354630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 
00:24:57.748 [2024-07-15 13:17:19.354840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.354866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.355046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.355071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.355241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.355266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.355424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.355452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.355630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.355654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.355867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.355905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.356119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.356150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.356337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.356362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.356524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.356549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.356716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.356745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 
00:24:57.748 [2024-07-15 13:17:19.356904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.356958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.357185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-07-15 13:17:19.357210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.748 qpair failed and we were unable to recover it. 00:24:57.748 [2024-07-15 13:17:19.357379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.357404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.357652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.357679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.357884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.357910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.358154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.358201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.358387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.358412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.358541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.358566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.358709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.358733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:57.749 qpair failed and we were unable to recover it. 00:24:57.749 [2024-07-15 13:17:19.358910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-07-15 13:17:19.358939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 
00:24:58.330 [2024-07-15 13:17:19.822336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.822383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.822521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.822554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.822749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.822775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.822906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.822933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.823118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.823143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.823316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.823341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.823523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.823549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.823690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.823717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.823866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.823912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.824057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 
00:24:58.330 [2024-07-15 13:17:19.824246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.824412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.824559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.824730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.824949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.824976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.825136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.825175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.825337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.825363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.825518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.825545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.825706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.825733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.825931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.825959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 
00:24:58.330 [2024-07-15 13:17:19.826125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.826153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.826316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.330 [2024-07-15 13:17:19.826343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.330 qpair failed and we were unable to recover it. 00:24:58.330 [2024-07-15 13:17:19.826498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.826525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.826665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.826704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.826844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.826886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.827060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.827087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.827278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.827305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.827472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.827499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.827662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.827689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.827849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.827889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 
00:24:58.331 [2024-07-15 13:17:19.828045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.828072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.828205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.828246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.828402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.828428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.828607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.828634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.828881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.828909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.829165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.829194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.829359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.829386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.829543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.829571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.829752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.829779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.829936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.829963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 
00:24:58.331 [2024-07-15 13:17:19.830097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.830124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.830269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.830299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.830541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.830568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.830788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.830815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.830988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.831016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.831177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.831219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.831398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.831425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.831612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.831639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.831793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.831820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.832078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.832105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 
00:24:58.331 [2024-07-15 13:17:19.832354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.832380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.832599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.832626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.832803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.832830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.833001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.833028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.833182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.833208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.833362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.833389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.833631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.833659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.833822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.833849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.834106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.834133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 00:24:58.331 [2024-07-15 13:17:19.834353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.331 [2024-07-15 13:17:19.834380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.331 qpair failed and we were unable to recover it. 
00:24:58.331 [2024-07-15 13:17:19.834535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.331 [2024-07-15 13:17:19.834561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.331 qpair failed and we were unable to recover it.
00:24:58.331 [2024-07-15 13:17:19.834710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.331 [2024-07-15 13:17:19.834737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.331 qpair failed and we were unable to recover it.
00:24:58.331 [2024-07-15 13:17:19.834893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.331 [2024-07-15 13:17:19.834921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.331 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.835107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.835134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.835305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.835331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.835457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.835483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.835636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.835664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.835829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.835858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.836012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.836040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.836198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.836224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.836387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.836413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.836559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.836586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.836778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.836804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.837040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.837068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.837244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.837271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.837426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.837454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.837605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.837633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.837812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.837838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.838970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.838997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.839160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.839190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.839350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.839376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.839560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.839586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.839744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.839771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.839938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.839966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.840089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.840116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.840284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.840312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.840552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.840580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.840725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.840752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.840897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.840924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.841077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.841261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.841469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.841626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.841781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.841979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.842006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.842140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.842174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.842355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.842382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.332 [2024-07-15 13:17:19.842561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.332 [2024-07-15 13:17:19.842588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.332 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.842773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.842800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.842953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.842980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.843112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.843140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.843313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.843339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.843498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.843526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.843674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.843701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.843862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.843905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.844085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.844112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.844286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.844314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.844502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.844530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.844668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.844696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.844883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.845106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.845287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.845471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.845681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.845839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.845988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.846169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.846360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.846522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.846707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.846891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.846919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.847073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.847100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.847285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.847313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.847462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.847490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.847725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.847753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.847940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.847968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.848202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.848230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.848382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.848410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.848561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.848589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.848769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.848796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.848991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.849200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.849361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.849543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.849726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.849885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.849912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.850050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.850076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.850212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.333 [2024-07-15 13:17:19.850240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.333 qpair failed and we were unable to recover it.
00:24:58.333 [2024-07-15 13:17:19.850399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.850427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.850581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.850616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.850769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.850797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.850950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.850978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.851179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.851207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.851361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.851404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.851557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.851585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.851734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.851762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.851894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.851933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.852123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.852150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.852308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.852351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.852518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.852546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.852709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.852736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.852897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.852932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.853972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.853999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.854184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.854211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.854365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.854393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.854525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.854559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.854717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.854745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.854881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.854910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.855947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.855973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.856127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.856153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.334 [2024-07-15 13:17:19.856326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.334 [2024-07-15 13:17:19.856353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.334 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.856537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.856564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.856746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.856772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.856937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.856964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.857114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.857140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.857264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.857292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.857428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.857464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.857611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.857637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.857801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.857828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.858925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.858952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.859109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.859135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.859327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.859368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.859537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.859566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.859730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.859757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.859900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.859934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.860123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.860150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.860347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.860374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.860535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.860562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.860720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.860750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.860940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.860967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.861170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.861198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.861383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.861411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.861585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.861611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.861806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.861834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.862036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.862246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.862434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.862623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.862835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.862991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.863183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.863357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.863545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.863763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.863952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.863980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.864177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.335 [2024-07-15 13:17:19.864205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.335 qpair failed and we were unable to recover it.
00:24:58.335 [2024-07-15 13:17:19.864365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.864392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.864578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.864606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.864794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.864821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.864978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.865172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.865346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.865557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.865748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.865954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.865983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.866172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.866201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.866393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.866421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.866606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.866632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.866799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.866831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.866980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.336 [2024-07-15 13:17:19.867008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:58.336 qpair failed and we were unable to recover it.
00:24:58.336 [2024-07-15 13:17:19.869892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.869934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.870103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.870130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.870327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.870355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.870529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.870557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.870745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.870773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.870935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.870962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.871123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.871150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.871337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.871365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.871515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.871542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.871701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.871728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 
00:24:58.336 [2024-07-15 13:17:19.871872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.871908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.872104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.872132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.872336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.872364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.872532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.872560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.872744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.872782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.872923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.872953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.873179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.873221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.873417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.873445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.873582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.873610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.873767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.873794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 
00:24:58.336 [2024-07-15 13:17:19.873954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.873981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.874117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.874143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.874310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.874339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.874498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.874525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.874687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.874714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.874907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.874941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.875076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.336 [2024-07-15 13:17:19.875103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.336 qpair failed and we were unable to recover it. 00:24:58.336 [2024-07-15 13:17:19.875239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.875266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.875422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.875449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.875603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.875630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 
00:24:58.337 [2024-07-15 13:17:19.875788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.875815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.876938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.876966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.877145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.877182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.877305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.877332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.877522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.877549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 
00:24:58.337 [2024-07-15 13:17:19.877698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.877725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.877887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.877925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.878072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.878099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.878266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.878292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.878477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.878503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.878641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.878667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.878886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.878924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.879053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.879251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.879436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 
00:24:58.337 [2024-07-15 13:17:19.879595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.879780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.879942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.879973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.880104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.880130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.880267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.880293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.880477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.880504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.880658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.880684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.880833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.880860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.881008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.881181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 
00:24:58.337 [2024-07-15 13:17:19.881338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.881483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.881660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.881819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.881860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.882018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.882045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.882225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.882251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.882382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.882407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.882559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.337 [2024-07-15 13:17:19.882584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.337 qpair failed and we were unable to recover it. 00:24:58.337 [2024-07-15 13:17:19.882762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.882788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.882966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.882993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 
00:24:58.338 [2024-07-15 13:17:19.883143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.883169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.883301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.883326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.883478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.883515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.883717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.883743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.883869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.883900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.884039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.884065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.884245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.884271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.884431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.884457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.884610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.884636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.884791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.884821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 
00:24:58.338 [2024-07-15 13:17:19.884989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.885170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.885351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.885504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.885709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.885859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.885890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.886048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.886197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.886381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.886596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 
00:24:58.338 [2024-07-15 13:17:19.886774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.886925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.886950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.887128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.887313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.887468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.887652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.887828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.887986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.888166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.888362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 
00:24:58.338 [2024-07-15 13:17:19.888532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.888739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.888933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.888959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.889092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.889120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.889261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.889288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.889447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.889473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.889652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.889682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.889834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.889861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.338 qpair failed and we were unable to recover it. 00:24:58.338 [2024-07-15 13:17:19.890036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.338 [2024-07-15 13:17:19.890063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.890193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.890220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 
00:24:58.339 [2024-07-15 13:17:19.890372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.890399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.890584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.890611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.890757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.890784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.890950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.890977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.891111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.891136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.891299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.891326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.891491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.891517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.891673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.891700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.891858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.891889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.892044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 
00:24:58.339 [2024-07-15 13:17:19.892199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.892408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.892603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.892787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.892959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.892985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.893166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.893203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.893327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.893354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.893519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.893546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.893673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.893699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.893865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.893896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 
00:24:58.339 [2024-07-15 13:17:19.894022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.894170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.894352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.894544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.894728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.894959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.894985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.895140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.895178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.895329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.895356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.895585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.895612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.895797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.895823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 
00:24:58.339 [2024-07-15 13:17:19.895983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.896009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.896139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.896164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.896299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.896325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.896450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.896476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.896657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.339 [2024-07-15 13:17:19.896683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.339 qpair failed and we were unable to recover it. 00:24:58.339 [2024-07-15 13:17:19.896836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.896862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it. 00:24:58.340 [2024-07-15 13:17:19.897005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.897031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it. 00:24:58.340 [2024-07-15 13:17:19.897183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.897210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it. 00:24:58.340 [2024-07-15 13:17:19.897369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.897395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it. 00:24:58.340 [2024-07-15 13:17:19.897625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.897651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it. 
00:24:58.340 [2024-07-15 13:17:19.897830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.340 [2024-07-15 13:17:19.897857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.340 qpair failed and we were unable to recover it.
[... identical error group (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeated back-to-back 200+ times, timestamps 2024-07-15 13:17:19.897830 through 13:17:19.935950, elapsed 00:24:58.340-00:24:58.345; every connection attempt was refused and no qpair recovered ...]
00:24:58.345 [2024-07-15 13:17:19.936072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.936099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.936256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.936283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.936465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.936491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.936641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.936667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.936797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.936823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.936980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.937138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.937316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.937499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.937715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 
00:24:58.345 [2024-07-15 13:17:19.937896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.937923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.938095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.938122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.938304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.938331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.938487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.938513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.938669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.938696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.938853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.938884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.939052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.939078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.939232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.939258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.939408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.939434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.939614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.939641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 
00:24:58.345 [2024-07-15 13:17:19.939801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.939827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.939985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.940160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.940342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.940519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.940732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.345 qpair failed and we were unable to recover it. 00:24:58.345 [2024-07-15 13:17:19.940914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.345 [2024-07-15 13:17:19.940942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.941094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.941121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.941270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.941296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.941456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.941482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 
00:24:58.346 [2024-07-15 13:17:19.941639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.941667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.941847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.941882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.942067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.942251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.942435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.942617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.942804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.942987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.943163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.943345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 
00:24:58.346 [2024-07-15 13:17:19.943526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.943741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.943926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.943953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.944078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.944105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.944261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.944288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.944472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.944499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.944656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.944683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.944832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.944859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.945019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.945045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.945229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.945255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 
00:24:58.346 [2024-07-15 13:17:19.945435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.945462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.945623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.945650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.945804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.945831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.946958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.946991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.947123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.947149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 
00:24:58.346 [2024-07-15 13:17:19.947324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.947351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.947468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.947495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.947644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.947675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.947799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.947827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.947967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.948005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.948194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.948220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.948375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.948402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.346 qpair failed and we were unable to recover it. 00:24:58.346 [2024-07-15 13:17:19.948583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.346 [2024-07-15 13:17:19.948609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.948761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.948788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.948908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.948935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 
00:24:58.347 [2024-07-15 13:17:19.949063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.949089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.949248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.949275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.949428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.949455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.949645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.949671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.949847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.949873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.950069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.950096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.950250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.950277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.950466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.950496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.950651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.950678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.950822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.950849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 
00:24:58.347 [2024-07-15 13:17:19.951014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.951223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.951398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.951580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.951728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.951912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.951940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.952100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.952279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.952432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.952575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 
00:24:58.347 [2024-07-15 13:17:19.952759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.952971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.952998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.953151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.953178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.953328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.953354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.953474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.953501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.953630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.953656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.953807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.953834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.954006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.954184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.954394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 
00:24:58.347 [2024-07-15 13:17:19.954555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.954740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.954901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.347 [2024-07-15 13:17:19.954927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.347 qpair failed and we were unable to recover it. 00:24:58.347 [2024-07-15 13:17:19.955064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.955212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.955420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.955598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.955786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.955961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.955988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.956141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.956167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 
00:24:58.348 [2024-07-15 13:17:19.956292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.956318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.956476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.956502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.956652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.956678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.956830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.956857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.957839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.957865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 
00:24:58.348 [2024-07-15 13:17:19.958037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.958245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.958427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.958630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.958806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.958966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.958993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.959150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.959180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.959333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.959360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.959511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.959538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 00:24:58.348 [2024-07-15 13:17:19.959693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.348 [2024-07-15 13:17:19.959720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.348 qpair failed and we were unable to recover it. 
00:24:58.348 [2024-07-15 13:17:19.959867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.348 [2024-07-15 13:17:19.959901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:58.348 qpair failed and we were unable to recover it.
00:24:58.353 [the three messages above repeat verbatim for every reconnect attempt from 13:17:19.959867 through 13:17:19.997580; each connect() to 10.0.0.2, port=4420 fails with errno = 111 and the qpair is never recovered]
00:24:58.353 [2024-07-15 13:17:19.997727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.353 [2024-07-15 13:17:19.997753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.353 qpair failed and we were unable to recover it. 00:24:58.353 [2024-07-15 13:17:19.997868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.353 [2024-07-15 13:17:19.997901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.353 qpair failed and we were unable to recover it. 00:24:58.353 [2024-07-15 13:17:19.998053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.353 [2024-07-15 13:17:19.998080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.998241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.998271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.998451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.998478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.998658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.998685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.998808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.998835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.998990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.999177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.999384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 
00:24:58.354 [2024-07-15 13:17:19.999526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.999738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:19.999939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:19.999967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.000147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.000174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.000337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.000363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.000543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.000570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.000718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.000748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.000917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.000942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.001097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.001122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.001265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.001292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 
00:24:58.354 [2024-07-15 13:17:20.001462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.001488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.001637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.001663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.001821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.001847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.001985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.002157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.002340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.002481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.002676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.002860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.002892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.003044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 
00:24:58.354 [2024-07-15 13:17:20.003197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.003358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.003511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.003721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.003903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.003930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.004067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.004093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.004260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.004286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.004438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.004464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.004644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.004671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.004800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.004826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 
00:24:58.354 [2024-07-15 13:17:20.004997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.005024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.005153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.005180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.354 qpair failed and we were unable to recover it. 00:24:58.354 [2024-07-15 13:17:20.005359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.354 [2024-07-15 13:17:20.005386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.005553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.005579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.005705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.005731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.005866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.005898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.006066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.006093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.006278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.006305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.006461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.006487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.006641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.006668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 
00:24:58.355 [2024-07-15 13:17:20.006814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.006841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.007039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.007232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.007413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.007616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.007824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.007972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.008123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.008273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.008481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 
00:24:58.355 [2024-07-15 13:17:20.008633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.008806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.008832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.008982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.009163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.009316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.009502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.009706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.009856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.009889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.010050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.010206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 
00:24:58.355 [2024-07-15 13:17:20.010387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.010596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.010778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.010941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.010968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.011148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.011174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.011302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.011328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.011481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.011507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.011685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.011711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.011866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.011912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.012041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.012067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 
00:24:58.355 [2024-07-15 13:17:20.012190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.012217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.012400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.012427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.012579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.012606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.355 [2024-07-15 13:17:20.012784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.355 [2024-07-15 13:17:20.012810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.355 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.012942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.012970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.013128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.013155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.013310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.013336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.013495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.013521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.013696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.013722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.013885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.013912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 
00:24:58.356 [2024-07-15 13:17:20.014066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.014095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.014250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.014277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.356 [2024-07-15 13:17:20.014438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.356 [2024-07-15 13:17:20.014465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.356 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.014645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.014672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.014825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.014852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.015017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.015044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.015170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.015197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.015353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.015587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.015628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.015790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.015817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 
00:24:58.639 [2024-07-15 13:17:20.015981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.016193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.016376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.016532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.016694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.016852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.016887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.017028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.017244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.017422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.017583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 
00:24:58.639 [2024-07-15 13:17:20.017739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.017923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.017951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.018083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.018110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.018259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.018285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.018439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.018466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.639 qpair failed and we were unable to recover it. 00:24:58.639 [2024-07-15 13:17:20.018618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.639 [2024-07-15 13:17:20.018645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.018794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.018819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.018975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.019164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.019344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 
00:24:58.640 [2024-07-15 13:17:20.019524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.019732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.019913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.019951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.020141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.020167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.020314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.020339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.020471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.020497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.020618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.020645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.023894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.023948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.024149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.024181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 00:24:58.640 [2024-07-15 13:17:20.024361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.640 [2024-07-15 13:17:20.024400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.640 qpair failed and we were unable to recover it. 
00:24:58.640 [2024-07-15 13:17:20.024581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.640 [2024-07-15 13:17:20.024632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.640 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error on tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 13:17:20.024581 through 13:17:20.063326; every reconnect attempt is refused ...]
00:24:58.645 [2024-07-15 13:17:20.063326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.645 [2024-07-15 13:17:20.063352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.645 qpair failed and we were unable to recover it.
00:24:58.645 [2024-07-15 13:17:20.063509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.063536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.063671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.063829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.063856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.064966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.064994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.065151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.065177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 
00:24:58.645 [2024-07-15 13:17:20.065299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.645 [2024-07-15 13:17:20.065325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.645 qpair failed and we were unable to recover it. 00:24:58.645 [2024-07-15 13:17:20.065482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.065508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.065672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.065699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.065872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.065902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.066063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.066089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.066243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.066269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.066454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.066480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.066635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.066663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.066825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.066853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.067015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 
00:24:58.646 [2024-07-15 13:17:20.067191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.067389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.067574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.067757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.067934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.067961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.068114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.068144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.068265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.068292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.068422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.068453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.068601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.068628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.068798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.068826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 
00:24:58.646 [2024-07-15 13:17:20.069029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.069202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.069382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.069564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.069777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.069948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.069981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.070113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.070140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.070300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.070327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.070458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.070485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.070650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.070678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 
00:24:58.646 [2024-07-15 13:17:20.070813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.070843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.071000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.071027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.071185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.071212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.071402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.071431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.071613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.071640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.071796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.071823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.072019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.072173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.072357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.072550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 
00:24:58.646 [2024-07-15 13:17:20.072776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.646 [2024-07-15 13:17:20.072963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.646 [2024-07-15 13:17:20.072993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.646 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.073162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.073192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.073322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.073350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.073504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.073532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.073711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.073738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.073894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.073922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.074044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.074071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.074255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.074283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.074431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.074458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 
00:24:58.647 [2024-07-15 13:17:20.074621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.074648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.074802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.074829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.075913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.075941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.076108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.076136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.076294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.076331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 
00:24:58.647 [2024-07-15 13:17:20.076488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.076516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.076638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.076665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.076841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.076869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.077844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.077871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.078053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.078080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 
00:24:58.647 [2024-07-15 13:17:20.078208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.078235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.078391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.078419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.078612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.078639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.078791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.078818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.078990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.079157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.079301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.079457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.079627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.647 [2024-07-15 13:17:20.079808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.079835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 
00:24:58.647 [2024-07-15 13:17:20.079974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.647 [2024-07-15 13:17:20.080002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.647 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.080164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.080190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.080357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.080385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.080570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.080598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.080752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.080779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.080916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.080944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.081124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.081152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.081280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.081307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.081461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.081488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.081620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.081648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 
00:24:58.648 [2024-07-15 13:17:20.081801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.081828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.081993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.082205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.082400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.082587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.082767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.082966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.082998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.083159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.083187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.083343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.083370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.083529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.083557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 
00:24:58.648 [2024-07-15 13:17:20.083711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.083738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.083897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.083924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.084080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.084107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.084267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.084294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.084430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.084457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.084621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.084648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.084802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.084828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.085019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.085206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.085395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 
00:24:58.648 [2024-07-15 13:17:20.085599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.085794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.085950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.085977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.086111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.086138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.086299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.086324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.086452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.086480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.086658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.648 [2024-07-15 13:17:20.086685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.648 qpair failed and we were unable to recover it. 00:24:58.648 [2024-07-15 13:17:20.086841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.649 [2024-07-15 13:17:20.086867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.649 qpair failed and we were unable to recover it. 00:24:58.649 [2024-07-15 13:17:20.087051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.649 [2024-07-15 13:17:20.087077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.649 qpair failed and we were unable to recover it. 00:24:58.649 [2024-07-15 13:17:20.087224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.649 [2024-07-15 13:17:20.087251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.649 qpair failed and we were unable to recover it. 
00:24:58.649 [2024-07-15 13:17:20.087391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.649 [2024-07-15 13:17:20.087418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.649 qpair failed and we were unable to recover it.
[the same three-message sequence -- connect() failed with errno = 111, sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats with advancing timestamps roughly 175 more times between 13:17:20.087 and 13:17:20.120; duplicates elided]
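Errno 111 on Linux is ECONNREFUSED: the target address 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the default NVMe/TCP port), so every connect() issued by SPDK's posix socket layer is rejected immediately and the qpair cannot be established. A minimal standalone sketch that reproduces the exact errno -- illustrative only, not SPDK code; the address and port are simply taken from the log above:

  /* Reproduce errno 111 (ECONNREFUSED): connect() to a port with no
   * listener.  Not SPDK code; addr/port copied from the log for illustration. */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_port = htons(4420);                  /* default NVMe/TCP port */
      inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
          printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
      }
      close(fd);
      return 0;
  }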
00:24:58.653 [2024-07-15 13:17:20.120030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14000e0 is same with the state(5) to be set
00:24:58.653 [2024-07-15 13:17:20.120225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.653 [2024-07-15 13:17:20.120267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.653 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure sequence repeats roughly 30 more times for the new qpair, tqpair=0x7f5e84000b90, through 13:17:20.126; duplicates elided]
00:24:58.654 [2024-07-15 13:17:20.126204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.126231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.126375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.126402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.126532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.126559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.126747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.126774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.126932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.126960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.127117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.127144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.127308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.127336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.127510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.127537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.127697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.127724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.127907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.127935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 
00:24:58.654 [2024-07-15 13:17:20.128067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.128094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.128250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.128276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.128511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.128538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.128716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.128743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.128906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.128934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.129074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.129257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.129411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.129596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.129780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 
00:24:58.654 [2024-07-15 13:17:20.129968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.129995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.130137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.130164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.130316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.130343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.654 [2024-07-15 13:17:20.130496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.654 [2024-07-15 13:17:20.130523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.654 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.130677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.130704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.130886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.130913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.131047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.131074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.131234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.131261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.131413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.131440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.131588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.131615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 
00:24:58.655 [2024-07-15 13:17:20.131853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.131889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.132075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.132102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.132250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.132277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.132515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.132543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.132686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.132713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.132870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.132904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.133041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.133197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.133380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.133563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 
00:24:58.655 [2024-07-15 13:17:20.133770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.133948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.133976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.134103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.134129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.134287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.134314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.134546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.134573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.134728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.134754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.134900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.134928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.135084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.135231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.135415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 
00:24:58.655 [2024-07-15 13:17:20.135596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.135773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.135954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.135981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.136115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.136143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.136301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.136328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.136453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.136479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.136629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.136655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.136825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.136852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.137032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.137211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 
00:24:58.655 [2024-07-15 13:17:20.137396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.137559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.137704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.137860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.137892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.655 [2024-07-15 13:17:20.138052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.655 [2024-07-15 13:17:20.138079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.655 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.138263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.138290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.138434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.138461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.138615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.138643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.138881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.138909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.139045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 
00:24:58.656 [2024-07-15 13:17:20.139224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.139382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.139565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.139727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.139936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.139963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.140096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.140123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.140280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.140307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.140489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.140516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.140650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.140679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.140835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.140862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 
00:24:58.656 [2024-07-15 13:17:20.141006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.141184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.141368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.141550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.141728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.141905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.141932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.142090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.142117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.142273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.142301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.142535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.142563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.142717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.142744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 
00:24:58.656 [2024-07-15 13:17:20.142899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.142927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.143080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.143107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.143265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.143292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.143446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.143473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.143631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.143658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.143788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.143816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.144002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.144030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.144209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.144236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.144368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.144395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 00:24:58.656 [2024-07-15 13:17:20.144544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.656 [2024-07-15 13:17:20.144575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.656 qpair failed and we were unable to recover it. 
00:24:58.657 [2024-07-15 13:17:20.144730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.144756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.144903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.144931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.145084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.145111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.145345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.145381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.145540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.145567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.145688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.145715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.145837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.145865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.146027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.146177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.146325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 
00:24:58.657 [2024-07-15 13:17:20.146511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.146695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.146882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.146910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.147948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.147976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.148129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.148156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 
00:24:58.657 [2024-07-15 13:17:20.148284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.148310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.148471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.148498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.148681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.148708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.148864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.148897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 00:24:58.657 [2024-07-15 13:17:20.149931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.657 [2024-07-15 13:17:20.149958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.657 qpair failed and we were unable to recover it. 
00:24:58.657 [2024-07-15 13:17:20.150143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.657 [2024-07-15 13:17:20.150170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:58.657 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 13:17:20.150 through 13:17:20.190, mostly for tqpair=0x7f5e84000b90 with intermittent runs for tqpair=0x7f5e94000b90, always against addr=10.0.0.2, port=4420 ...]
00:24:58.663 [2024-07-15 13:17:20.191104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.191133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.191296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.191323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.191450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.191478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.191665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.191692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.191822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.191849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.192014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.192041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.192221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.192248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.192406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.192434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.192591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.192618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.192804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.192833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 
00:24:58.663 [2024-07-15 13:17:20.192998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.193026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.663 qpair failed and we were unable to recover it. 00:24:58.663 [2024-07-15 13:17:20.193183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.663 [2024-07-15 13:17:20.193210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.193366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.193392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.193580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.193606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.193761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.193790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.193944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.193971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.194154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.194180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.194360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.194387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.194541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.194568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.194725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.194751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 
00:24:58.664 [2024-07-15 13:17:20.194930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.194957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.195139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.195166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.195316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.195343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.195523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.195549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.195696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.195722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.195874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.195912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.196067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.196097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.196251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.196278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.196429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.196455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.196606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.196633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 
00:24:58.664 [2024-07-15 13:17:20.196818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.196844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.197946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.197973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.198164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.198190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.198374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.198400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.198550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.198577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 
00:24:58.664 [2024-07-15 13:17:20.198760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.198787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.198927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.198955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.199115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.199142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.199293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.199320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.199452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.199480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.199663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.199690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.199811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.664 [2024-07-15 13:17:20.200000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.664 [2024-07-15 13:17:20.200027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.664 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.200212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.200239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.200398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.200425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 
00:24:58.665 [2024-07-15 13:17:20.200550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.200576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.200696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.200723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.200885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.200912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.201045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.201073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.201265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.201292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.201441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.201467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.201625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.201652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.201816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.201858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.202000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.202163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 
00:24:58.665 [2024-07-15 13:17:20.202321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.202504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.202687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.202899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.202927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.203953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.203980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 
00:24:58.665 [2024-07-15 13:17:20.204111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.204315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.204341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.204472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.204499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.204653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.204679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.204809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.204835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.204976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.205161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.205310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.205511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.205698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 
00:24:58.665 [2024-07-15 13:17:20.205888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.205915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.206040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.206066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.206186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.206213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.206392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.206419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.206601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.206628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.665 qpair failed and we were unable to recover it. 00:24:58.665 [2024-07-15 13:17:20.206783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.665 [2024-07-15 13:17:20.206809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.206944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.206972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.207160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.207201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.207394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.207422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.207577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.207605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 
00:24:58.666 [2024-07-15 13:17:20.207760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.207787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.207972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.208154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.208345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.208531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.208735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.208948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.208977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.209103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.209130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.209287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.209313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.209496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.209523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 
00:24:58.666 [2024-07-15 13:17:20.209675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.209702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.209861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.209893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.210970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.210997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.211179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.211206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.211386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.211413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 
00:24:58.666 [2024-07-15 13:17:20.211582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.666 [2024-07-15 13:17:20.211609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.666 qpair failed and we were unable to recover it. 00:24:58.666 [2024-07-15 13:17:20.211739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.211766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.211936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.211965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.212143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.212170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.212322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.212349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.212525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.212552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.212709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.212736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.212886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.212914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.213071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.213098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.213278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.213305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 
00:24:58.667 [2024-07-15 13:17:20.213468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.213495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.213629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.213658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.213815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.213842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.214943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.214971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 00:24:58.667 [2024-07-15 13:17:20.215124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.215150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it. 
00:24:58.667 [2024-07-15 13:17:20.215309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.667 [2024-07-15 13:17:20.215336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.667 qpair failed and we were unable to recover it.
00:24:58.667 [... the same two errors (posix.c:1038:posix_sock_create: connect() failed, errno = 111 and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error) repeat continuously from 13:17:20.215 through 13:17:20.254, alternating between tqpair=0x7f5e94000b90 and tqpair=0x7f5e84000b90, always for addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:58.672 [2024-07-15 13:17:20.254058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.254086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it.
00:24:58.672 [2024-07-15 13:17:20.254263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.254290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.254445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.254472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.254635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.254662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.254818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.254846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.255013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.255042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.255229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.255256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.255430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.255463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.672 qpair failed and we were unable to recover it. 00:24:58.672 [2024-07-15 13:17:20.255616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.672 [2024-07-15 13:17:20.255643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.255820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.255848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.256012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 
00:24:58.673 [2024-07-15 13:17:20.256186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.256370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.256552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.256742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.256930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.256959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.257141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.257168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.257298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.257326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.257477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.257504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.257658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.257685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.257861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.257893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 
00:24:58.673 [2024-07-15 13:17:20.258059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.258086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.258235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.258262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.258444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.258470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.258649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.258676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.258856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.258888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.259069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.259096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.259275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.259302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.259458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.259485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.259669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.259696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.259857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.259892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 
00:24:58.673 [2024-07-15 13:17:20.260051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.260236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.260423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.260634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.260810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.260972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.260999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.261154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.261181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.261339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.261367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.261500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.261528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.261649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.261676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 
00:24:58.673 [2024-07-15 13:17:20.261823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.261849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.261983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.262164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.262321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.262467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.262649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.262862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.262900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.263053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.673 [2024-07-15 13:17:20.263080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.673 qpair failed and we were unable to recover it. 00:24:58.673 [2024-07-15 13:17:20.263231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.263257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.263404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.263430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 
00:24:58.674 [2024-07-15 13:17:20.263580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.263606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.263759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.263786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.263938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.263965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.264082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.264109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.264263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.264290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.264442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.264469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.264630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.264657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.264814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.264841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.265003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.265209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 
00:24:58.674 [2024-07-15 13:17:20.265366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.265553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.265761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.265961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.265988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.266142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.266170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.266297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.266324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.266506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.266533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.266683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.266709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.266836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.266862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.267019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 
00:24:58.674 [2024-07-15 13:17:20.267231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.267410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.267591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.267749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.267960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.267987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.268116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.268143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.268325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.268352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.268499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.268527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.268688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.268716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.268898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.268926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 
00:24:58.674 [2024-07-15 13:17:20.269059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.269086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.269242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.269269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.269389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.674 [2024-07-15 13:17:20.269415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.674 qpair failed and we were unable to recover it. 00:24:58.674 [2024-07-15 13:17:20.269594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.269620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.269764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.269790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.269945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.269971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.270127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.270158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.270274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.270301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.270491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.270517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.270696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.270723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 
00:24:58.675 [2024-07-15 13:17:20.270856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.270888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.271099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.271274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.271455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.271636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.271797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.271977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.272156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.272345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.272515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 
00:24:58.675 [2024-07-15 13:17:20.272734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.272942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.272969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.273101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.273129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.273284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.273312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.273471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.273498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.273621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.273648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.273822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.273848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.274005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.274186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.274367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 
00:24:58.675 [2024-07-15 13:17:20.274522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.274695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.274906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.274933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.275099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.275126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.275281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.275307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.275457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.275484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.275661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.275687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.275845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.275872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.276040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.276067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.276218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.276245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 
00:24:58.675 [2024-07-15 13:17:20.276426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.276453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.276587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.276614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.276794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.675 [2024-07-15 13:17:20.276821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.675 qpair failed and we were unable to recover it. 00:24:58.675 [2024-07-15 13:17:20.276971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.276998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.277148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.277175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.277327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.277354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.277486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.277515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.277668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.277695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.277834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.277861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 00:24:58.676 [2024-07-15 13:17:20.278005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.278032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it. 
00:24:58.676 [2024-07-15 13:17:20.278178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-15 13:17:20.278204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.676 qpair failed and we were unable to recover it.
[the same three-message sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every reconnection attempt from 13:17:20.278335 through 13:17:20.316298]
00:24:58.967 [2024-07-15 13:17:20.316455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.316481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it.
00:24:58.967 [2024-07-15 13:17:20.316608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.316635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.316792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.316819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.316956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.316985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.317150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.317178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.317331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.317358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.317490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.317518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.317688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.317715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.317866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.317905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.318042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.318190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 
00:24:58.967 [2024-07-15 13:17:20.318403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.318560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.318743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.318944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.318976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.319121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.319150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.319307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.319334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.319516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.319543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.319673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.319701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.319863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.319896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.320048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 
00:24:58.967 [2024-07-15 13:17:20.320208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.320395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.320581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.320740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.320903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.320932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.321091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.321118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.321270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.321299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.321451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.321480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.321630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.321657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.321793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.321821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 
00:24:58.967 [2024-07-15 13:17:20.321979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.322164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.322334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.322518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.322680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.322833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.322864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.323033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.323239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.323447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.323606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 
00:24:58.967 [2024-07-15 13:17:20.323761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.323933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.323966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.324096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.324124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.324279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.324305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.324447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.324474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.324624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.324661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.324823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.324850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.325022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.325198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.325364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 
00:24:58.967 [2024-07-15 13:17:20.325554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.325761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.325941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.325970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.326101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.326131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.326268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.326295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.326444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.326481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.326642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.326669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.967 qpair failed and we were unable to recover it. 00:24:58.967 [2024-07-15 13:17:20.326827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.967 [2024-07-15 13:17:20.326853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.327017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.327213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.327401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.327551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.327738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.327916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.327944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.328958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.328986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.329113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.329140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.329299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.329326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.329480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.329514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.329663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.329690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.329839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.329873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.330078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.330105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.330252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.330278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.330450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.330485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.330681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.330707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.330829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.330856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.331027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.331055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.331207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.331234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.331388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.331414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.331598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.331632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.331789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.331816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.331974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.332158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.332371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.332554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.332763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.332925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.332952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.333125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.333152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.333291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.333318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.333495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.333525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.333704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.333731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.333852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.333884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.334052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.334085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.334244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.334428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.334466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.334621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.334647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.334831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.334859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.335898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.335926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.336088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.336114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.336286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.336314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.336461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.336488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.968 [2024-07-15 13:17:20.336636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.336662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.336784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.336811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.336990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.337141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.337297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.337501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.337689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.337870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.337902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.338021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.338047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 00:24:58.968 [2024-07-15 13:17:20.338201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.968 [2024-07-15 13:17:20.338228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.968 qpair failed and we were unable to recover it. 
00:24:58.969 [2024-07-15 13:17:20.338408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.338435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.338570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.338596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.338746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.338773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.338907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.338936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.339091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.339118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.339298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.339325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.339477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.339505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.339687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.339714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.339883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.339910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 00:24:58.969 [2024-07-15 13:17:20.340063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.969 [2024-07-15 13:17:20.340090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.969 qpair failed and we were unable to recover it. 
00:24:58.969 [2024-07-15 13:17:20.340252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.969 [2024-07-15 13:17:20.340279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.969 qpair failed and we were unable to recover it.
00:24:58.969 [... the same three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 13:17:20.340 through 13:17:20.378 ...]
00:24:58.972 [2024-07-15 13:17:20.378474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.972 [2024-07-15 13:17:20.378501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.972 qpair failed and we were unable to recover it.
00:24:58.972 [2024-07-15 13:17:20.378677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.972 [2024-07-15 13:17:20.378704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.378890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.378918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.379102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.379285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.379435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.379614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.379801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.379964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.380137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.380316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.380500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.380684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.380886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.380913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.381943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.381971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.382135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.382165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.382289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.382320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.382480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.382506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.382632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.382659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.382830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.382857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.383054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.383249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.383466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.383649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.383834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.383999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.384159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.384336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.384549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.384739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.384925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.384954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.385091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.385118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.385280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.385308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.385462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.385489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.385647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.385674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.385815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.385843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.386009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.386188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.386339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.386544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.386751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.386927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.386956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.387077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.387288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.387444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.387626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.387799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.387948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.387975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.388126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.388153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.388311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.388338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.388488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.388515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.388676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.388702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.388851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.388884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.389038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.389065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.389185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.389211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 00:24:58.973 [2024-07-15 13:17:20.389393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.973 [2024-07-15 13:17:20.389419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.973 qpair failed and we were unable to recover it. 
00:24:58.973 [2024-07-15 13:17:20.389568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.389599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.389725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.389752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.389904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.389931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.390110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.390136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.390289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.390316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.390503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.390530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.390713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.390739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.390894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.390921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.391054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.391081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.391263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.391289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.391421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.391448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.391604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.391631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.391784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.391811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.391993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.392155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.392330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.392514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.392709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.392890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.392917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.393096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.393123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.393271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.393298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.393447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.393473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.393650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.393677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.393805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.393831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.393992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.394153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.394341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.394542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.394700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.394891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.394920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.395088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.395115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.395298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.395325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.395484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.395510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.395667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.395695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.395847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.395874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.396032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.396234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.396413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.396566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.396732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.396938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.396969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.397126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.397311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.397491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.397641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.397833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.397997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.398187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.398391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.398543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.398718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.398922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.398949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.399078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.399105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.399285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.399312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.399444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.399470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.399643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.399669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.399849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.399880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.400008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.400035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.400203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.400230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.400356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.400383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 
00:24:58.974 [2024-07-15 13:17:20.400525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.400551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.974 [2024-07-15 13:17:20.400708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.974 [2024-07-15 13:17:20.400734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.974 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.400859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.400892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.401907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.401934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 00:24:58.975 [2024-07-15 13:17:20.402100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.975 [2024-07-15 13:17:20.402127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.975 qpair failed and we were unable to recover it. 
00:24:58.975 [2024-07-15 13:17:20.402258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.975 [2024-07-15 13:17:20.402285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.975 qpair failed and we were unable to recover it.
00:24:58.978 [... the same three-line error cycle repeats ~210 times, with only the timestamp advancing from 2024-07-15 13:17:20.402258 through 13:17:20.440681: every reconnect attempt to tqpair=0x7f5e94000b90 (addr=10.0.0.2, port=4420) fails in posix_sock_create with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and the qpair is not recovered ...]
00:24:58.978 [2024-07-15 13:17:20.440813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.440844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.441969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.441996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.442146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.442173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.442327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.442354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 00:24:58.978 [2024-07-15 13:17:20.442537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.978 [2024-07-15 13:17:20.442564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.978 qpair failed and we were unable to recover it. 
00:24:58.978 [2024-07-15 13:17:20.442747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.442774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.442930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.442962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.443140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.443170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.443334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.443361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.443517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.443544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.443677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.443705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.443892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.443920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.444081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.444107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.444260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.444439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.444466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.444647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.444674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.444826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.444853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.444992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.445178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.445353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.445533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.445682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.445871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.445904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.446092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.446118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.446300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.446327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.446487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.446513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.446698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.446724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.446872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.446914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.447099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.447305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.447464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.447648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.447830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.447984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.448167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.448346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.448527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.448733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.448943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.448971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.449140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.449166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.449323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.449350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.449529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.449556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.449711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.449737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.449927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.449954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.450113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.450139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.450295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.450321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.450479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.450505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.450687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.450713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.450902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.450929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.451949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.451977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.452135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.452162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.452313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.452340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.452523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.452550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.452713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.452739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.452924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.452951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.453134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.453161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.453287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.453314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.453472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.453499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.453655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.453682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 00:24:58.979 [2024-07-15 13:17:20.453838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.453865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.979 qpair failed and we were unable to recover it. 
00:24:58.979 [2024-07-15 13:17:20.454024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.979 [2024-07-15 13:17:20.454051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.454204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.454230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.454354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.454380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.454541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.454567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.454747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.454774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.454931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.454958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.455118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.455145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.455305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.455331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.455523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.455549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.455703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.455729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.455887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.455917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.456061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.456087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.456247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.456273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.456433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.456459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.456639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.456666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.456819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.456846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.457025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.457053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.457211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.457237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.457394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.457421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.457579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.457605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.457786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.457813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.457992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.458175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.458375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.458573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.458752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.458963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.458990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.459140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.459167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.459325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.459351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.459506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.459533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.459654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.459681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.459874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.459905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.460888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.460915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.461048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.461075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.461197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.461224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.461379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.461406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.461589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.461616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.461795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.461821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.461976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.462136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.462336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.462518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.462664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.462817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.462843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.463006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.463169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.463355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.463528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.463737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.463914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.463941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.464061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.464088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.464275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.464301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.464451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.464478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.464641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.464668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 00:24:58.980 [2024-07-15 13:17:20.464851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.980 [2024-07-15 13:17:20.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.980 qpair failed and we were unable to recover it. 
00:24:58.980 [2024-07-15 13:17:20.465067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.980 [2024-07-15 13:17:20.465094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.980 qpair failed and we were unable to recover it.
00:24:58.980 [... the same three-record sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with fresh timestamps from 13:17:20.465224 through 13:17:20.503078 ...]
00:24:58.985 [2024-07-15 13:17:20.503229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.503256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.503392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.503418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.503535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.503562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.503737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.503764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.503919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.503946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.504129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.504155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.504304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.504331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.504484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.504514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.504676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.504703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.504862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.504897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 
00:24:58.985 [2024-07-15 13:17:20.505055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.505231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.505410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.505590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.505801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.505964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.505991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.506113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.506140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.506262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.506289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.506481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.506508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.506640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.506667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 
00:24:58.985 [2024-07-15 13:17:20.506819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.506845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.507869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.507900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.508026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.508208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.508369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 
00:24:58.985 [2024-07-15 13:17:20.508523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.508675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.508856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.508888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.509075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.509102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.509287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.509314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.509469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.509497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.509663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.509690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.509850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.509883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.510036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.510063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.510192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.510219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 
00:24:58.985 [2024-07-15 13:17:20.510398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.985 [2024-07-15 13:17:20.510425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.985 qpair failed and we were unable to recover it. 00:24:58.985 [2024-07-15 13:17:20.510614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.510640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.510781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.510807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.510987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.511204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.511382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.511573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.511728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.511895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.511922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.512079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.512105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.512253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.512280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.512435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.512462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.512623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.512650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.512806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.512834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.513929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.513956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.514113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.514140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.514270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.514296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.514447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.514474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.514629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.514655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.514815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.514841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.514980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.515160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.515340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.515548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.515726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.515889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.515917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.516098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.516125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.516252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.516279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.516459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.516486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.516674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.516701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.516886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.516913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.517090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.517117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.517270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.517297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.517449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.517476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.517630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.517660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.517821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.517848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.518906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.518934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.519119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.519149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.519286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.519313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.519468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.519495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.519678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.519705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.519859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.519904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.520066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.520093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.520250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.520276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.520431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.520458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.520640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.520667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.520816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.520848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.521018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.521045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.521203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.521229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.521411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.521437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 
00:24:58.986 [2024-07-15 13:17:20.521589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.521615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.521772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.521798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.521978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.522005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.522164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.522191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.986 [2024-07-15 13:17:20.522341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.986 [2024-07-15 13:17:20.522367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.986 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.522520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.522547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.522692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.522719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.522870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.522901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.523033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.523061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.523252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.523278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 
00:24:58.987 [2024-07-15 13:17:20.523459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.523485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.523641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.523668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.523792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.523818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.523981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.524143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.524353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.524504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.524718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.524923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.524951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.525137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.525164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 
00:24:58.987 [2024-07-15 13:17:20.525345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.525372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.525518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.525544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.525699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.525726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.525888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.525915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 00:24:58.987 [2024-07-15 13:17:20.526953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.987 [2024-07-15 13:17:20.526980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.987 qpair failed and we were unable to recover it. 
00:24:58.987 [2024-07-15 13:17:20.527113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.987 [2024-07-15 13:17:20.527142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.987 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. sequence repeats for every subsequent connection attempt through 2024-07-15 13:17:20.565097 ...]
00:24:58.991 [2024-07-15 13:17:20.565230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.565261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.565442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.565468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.565619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.565645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.565801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.565829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.565985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.566167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.566342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.566529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.566684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.566859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.566896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.567041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.567068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.567249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.567277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.567461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.567488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.567630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.567658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.567798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.567832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.568028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.568057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.568234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.568261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.568419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.568454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.568619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.568645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.568795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.568823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.569005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.569200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.569377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.569547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.569723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.569883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.569911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.570041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.570191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.570353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.570536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.570738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.570961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.570988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.571145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.571172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.571295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.571330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.571519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.571545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.571700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.571727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.571886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.571914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.572045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.572254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.572404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.572582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.572770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.572937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.572965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.573121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.573147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.573268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.573294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.573449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.573478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.573640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.573666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.573822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.573849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.574010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.574174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.574360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.574539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.574723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.574901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.574930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.575074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.575105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.575275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.575301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.575446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.575473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.575660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.575688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.575847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.575874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.576030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.576057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 
00:24:58.991 [2024-07-15 13:17:20.576212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.991 [2024-07-15 13:17:20.576238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.991 qpair failed and we were unable to recover it. 00:24:58.991 [2024-07-15 13:17:20.576398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.576426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.576606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.576633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.576790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.576817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.576972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.576999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.577156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.577183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.577363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.577389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.577573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.577601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.577784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.577811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.577987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.578169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.578319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.578525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.578709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.578920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.578948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.579078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.579240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.579423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.579588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.579759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.579964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.579991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.580172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.580199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.580361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.580397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.580522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.580550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.580710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.580737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.580891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.580919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.581103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.581134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.581295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.581322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.581447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.581474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.581630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.581657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.581817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.581843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.582006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.582044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.582229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.582256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.582441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.582468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.582616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.582647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.582831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.582869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.583041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.583247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.583428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.583608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.583795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.583972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.583999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.584167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.584196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.584353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.584379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.584535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.584562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.584693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.584721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.584874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.584908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.585073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.585100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.585268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.585294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.585450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.585477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.585638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.585665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.585845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.585873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.586955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.586983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.587113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.587140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.587318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.587346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.587506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.587533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.587689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.587716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.587841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.587868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.588069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.588103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.588274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.588301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.588455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.588482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.588657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.588683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.588834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.588861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.589054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.589082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 00:24:58.992 [2024-07-15 13:17:20.589270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.992 [2024-07-15 13:17:20.589297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.992 qpair failed and we were unable to recover it. 
00:24:58.992 [2024-07-15 13:17:20.589457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.992 [2024-07-15 13:17:20.589484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.992 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 13:17:20.589 through 13:17:20.628 ...]
00:24:58.997 [2024-07-15 13:17:20.628433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.997 [2024-07-15 13:17:20.628470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:58.997 qpair failed and we were unable to recover it.
00:24:58.997 [2024-07-15 13:17:20.628610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.628638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.628762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.628789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.628957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.628985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.629115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.629144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.629328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.629354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.629508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.629535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.629687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.629714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.629891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.629919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.630081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.630108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.630242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.630270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 
00:24:58.997 [2024-07-15 13:17:20.630432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.630459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.630631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.630658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.630822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.630849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.631888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.631916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.632064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.632091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 
00:24:58.997 [2024-07-15 13:17:20.632239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.632266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.632402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.632428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.632576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.632609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.632764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.632792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.632990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.633150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.633358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.633514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.633695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.633892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.633920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 
00:24:58.997 [2024-07-15 13:17:20.634106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.634134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.634260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.634287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.634443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.634470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.634629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.634655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.634790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.634821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.997 qpair failed and we were unable to recover it. 00:24:58.997 [2024-07-15 13:17:20.634984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.997 [2024-07-15 13:17:20.635012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.635143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.635170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.635299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.635326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.635458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.635485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.635659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.635688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 
00:24:58.998 [2024-07-15 13:17:20.635814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.635840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.636917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.636947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.637102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.637292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.637473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 
00:24:58.998 [2024-07-15 13:17:20.637629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.637781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.637962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.637991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.638142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.638169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.638330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.638357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.638513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.638539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.638670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.638698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.638853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.638885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.639053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.639081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.639205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.639233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 
00:24:58.998 [2024-07-15 13:17:20.639392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.639419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:58.998 [2024-07-15 13:17:20.639577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.998 [2024-07-15 13:17:20.639604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:58.998 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.639764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.639791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.639946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.639974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.640099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.640126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.640309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.640336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.640501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.640529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.640688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.640715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.640868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.640900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.641063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 
00:24:59.283 [2024-07-15 13:17:20.641231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.641386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.641570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.641755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.641936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.641972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.642125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.642151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.642314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.642342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.642470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.642496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.642645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.642671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.642822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.642850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 
00:24:59.283 [2024-07-15 13:17:20.642973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.643153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.643353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.643516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.643695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.643900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.643927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.644089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.644118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.644269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.644296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.644449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.644475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.644658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.644684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 
00:24:59.283 [2024-07-15 13:17:20.644832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.644861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.644987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.645944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.645972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.646140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.646168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 00:24:59.283 [2024-07-15 13:17:20.646325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.283 [2024-07-15 13:17:20.646352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.283 qpair failed and we were unable to recover it. 
00:24:59.284 [2024-07-15 13:17:20.646505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.646532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.646695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.646722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.646886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.646913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.647849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.647896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.648019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 
00:24:59.284 [2024-07-15 13:17:20.648199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.648350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.648529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.648710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.648896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.648927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.649085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.649111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.649259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.649286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.649453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.649480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.649661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.649688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.649842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.649869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 
00:24:59.284 [2024-07-15 13:17:20.650009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.650197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.650368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.650554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.650736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.650912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.650939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.651089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.651116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.651305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.651333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.651485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.651513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 00:24:59.284 [2024-07-15 13:17:20.651698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.284 [2024-07-15 13:17:20.651725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.284 qpair failed and we were unable to recover it. 
00:24:59.284 [2024-07-15 13:17:20.651883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.284 [2024-07-15 13:17:20.651911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.284 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 13:17:20.652069 through 13:17:20.690329 ...]
00:24:59.290 [2024-07-15 13:17:20.690477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.690503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.690677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.690708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.690866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.690898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.691103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.691266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.691445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.691654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.691814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.691998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.692177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 
00:24:59.290 [2024-07-15 13:17:20.692353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.692557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.692713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.692890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.692917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.693945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.693972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 
00:24:59.290 [2024-07-15 13:17:20.694127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.694153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.694279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.694306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.694456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.694482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.694663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.694689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.694842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.694869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.695035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.695240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.695423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.695602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.695762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 
00:24:59.290 [2024-07-15 13:17:20.695940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.695967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.696098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.696124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.696304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.696331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.696540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.696567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.696721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.696748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.290 qpair failed and we were unable to recover it. 00:24:59.290 [2024-07-15 13:17:20.696900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.290 [2024-07-15 13:17:20.696927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.697096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.697125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.697299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.697330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.697508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.697534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.697691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.697718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 
00:24:59.291 [2024-07-15 13:17:20.697873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.697925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.698082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.698112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.698271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.698314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.698457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.698487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.698638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.698664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.698822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.698848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.699067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.699097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.699281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.699308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.699511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.699541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.699677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.699706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 
00:24:59.291 [2024-07-15 13:17:20.699864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.699898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.700050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.700093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.700267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.700296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.700463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.700490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.700614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.700656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.700855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.700892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.701097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.701123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.701272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.701301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.701460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.701488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.701661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.701687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 
00:24:59.291 [2024-07-15 13:17:20.701846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.701872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.702050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.702235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.702438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.702627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.702829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.702981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.703009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.703139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.703166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.291 qpair failed and we were unable to recover it. 00:24:59.291 [2024-07-15 13:17:20.703322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.291 [2024-07-15 13:17:20.703349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.703501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.703544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 
00:24:59.292 [2024-07-15 13:17:20.703708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.703737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.703884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.703912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.704046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.704089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.704284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.704314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.704469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.704496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.704645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.704689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.704854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.704891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.705071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.705097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.705259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.705288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.705488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.705518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 
00:24:59.292 [2024-07-15 13:17:20.705699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.705725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.705896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.705931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.706106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.706135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.706342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.706368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.706556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.706583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.706760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.706786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.706909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.706936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.707115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.707159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.707329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.707358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.707536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.707562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 
00:24:59.292 [2024-07-15 13:17:20.707763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.707792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.707954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.707983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.708151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.708177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.708347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.708376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.708573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.708603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.708781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.708807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.709006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.709036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.709197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.709226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.709436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.709462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.709629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.709658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 
00:24:59.292 [2024-07-15 13:17:20.709853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.709887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.710061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.710087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.710287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.710316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.710514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.710542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.710694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.710726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.710890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.710934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.711103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.711132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.711329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.711355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.711562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.292 [2024-07-15 13:17:20.711591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.292 qpair failed and we were unable to recover it. 00:24:59.292 [2024-07-15 13:17:20.711791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.711817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 
00:24:59.293 [2024-07-15 13:17:20.711938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.711965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.712122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.712164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.712337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.712366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.712541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.712567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.712738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.712767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.712939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.712969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.713145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.713171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.713344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.713373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.713543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.713572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.713750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.713776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 
00:24:59.293 [2024-07-15 13:17:20.713945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.713974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.714139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.714172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.714348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.714375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.714577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.714606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.714803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.714832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.715046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.715216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.715422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.715601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 00:24:59.293 [2024-07-15 13:17:20.715779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it. 
00:24:59.293 [2024-07-15 13:17:20.715965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.293 [2024-07-15 13:17:20.715995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.293 qpair failed and we were unable to recover it.
00:24:59.298 (the preceding connect()/qpair error triple repeats verbatim for every retry from 13:17:20.716 through 13:17:20.758, differing only in timestamps; roughly 200 repetitions elided)
00:24:59.298 [2024-07-15 13:17:20.758330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.758360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it.
00:24:59.298 [2024-07-15 13:17:20.758555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.758584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.758740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.758769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.758962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.758989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.759146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.759172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.759328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.759354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.759512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.759539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.298 qpair failed and we were unable to recover it. 00:24:59.298 [2024-07-15 13:17:20.759723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.298 [2024-07-15 13:17:20.759753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.759919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.759947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.760112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.760146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.760316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.760345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 
00:24:59.299 [2024-07-15 13:17:20.760497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.760524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.760697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.760727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.760902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.760930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.761085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.761112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.761316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.761346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.761507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.761536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.761736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.761763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.761936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.761966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.762112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.762141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.762312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.762339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 
00:24:59.299 [2024-07-15 13:17:20.762515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.762545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.762816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.762849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.763027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.763055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.763225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.763254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.763426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.763456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.763610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.763637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.763792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.763837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.764024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.764177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.764375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 
00:24:59.299 [2024-07-15 13:17:20.764577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.764775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.764960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.764991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.765187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.765217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.765364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.765391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.765549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.765576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.765780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.765810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.765972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.766130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.766363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 
00:24:59.299 [2024-07-15 13:17:20.766567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.766725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.766910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.766953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.767105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.767132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.767330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.767359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.767536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.767566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.767720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.299 [2024-07-15 13:17:20.767746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.299 qpair failed and we were unable to recover it. 00:24:59.299 [2024-07-15 13:17:20.767887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.767931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.768074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.768108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.768282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.768309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 
00:24:59.300 [2024-07-15 13:17:20.768477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.768507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.768657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.768686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.768871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.768916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.769069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.769099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.769240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.769271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.769454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.769481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.769683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.769714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.769901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.769943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.770089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.770116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.770314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.770343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 
00:24:59.300 [2024-07-15 13:17:20.770502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.770534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.770707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.770733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.770862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.770896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.771931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.771957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.772086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.772112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 
00:24:59.300 [2024-07-15 13:17:20.772293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.772318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.772501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.772526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.772706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.772732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.772895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.772921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.773077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.773103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.773269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.773295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.773486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.773511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.773641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.773667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.773825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.773851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.774007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.774033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 
00:24:59.300 [2024-07-15 13:17:20.774157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.774183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.300 [2024-07-15 13:17:20.774319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.300 [2024-07-15 13:17:20.774345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.300 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.774510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.774536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.774717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.774742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.774866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.774901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.775063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.775089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.775254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.775279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.775433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.775460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.775595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.775625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.775809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.775834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 
00:24:59.301 [2024-07-15 13:17:20.775982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.776145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.776329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.776513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.776677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.776864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.776910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.777036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.777202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.777352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.777558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 
00:24:59.301 [2024-07-15 13:17:20.777714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.777919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.777945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.778093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.778118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.778306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.778331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.778486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.778511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.778676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.778702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.778824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.778851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.779000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.779150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.779335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 
00:24:59.301 [2024-07-15 13:17:20.779480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.779692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.779841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.779867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.780925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.780951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.781083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.781108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 
00:24:59.301 [2024-07-15 13:17:20.781287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.781312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.781466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.781491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.301 qpair failed and we were unable to recover it. 00:24:59.301 [2024-07-15 13:17:20.781664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.301 [2024-07-15 13:17:20.781689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.781817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.781842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.781998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.782162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.782312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.782522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.782732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 00:24:59.302 [2024-07-15 13:17:20.782942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-15 13:17:20.782969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.302 qpair failed and we were unable to recover it. 
00:24:59.302 [2024-07-15 13:17:20.783128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.302 [2024-07-15 13:17:20.783155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.302 qpair failed and we were unable to recover it.
00:24:59.302 [... the same connect()/sock-connection-error/qpair-failed triplet repeats for every retry from 13:17:20.783310 onward, all against tqpair=0x7f5e94000b90, addr=10.0.0.2, port=4420 ...]
00:24:59.302 [2024-07-15 13:17:20.799275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.302 [2024-07-15 13:17:20.799315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.302 qpair failed and we were unable to recover it.
00:24:59.307 [... identical failures continue through 13:17:20.824051, alternating between tqpair=0x7f5e94000b90 and tqpair=0x7f5e8c000b90; every connect() returns errno = 111 and no qpair recovers ...]
00:24:59.307 [2024-07-15 13:17:20.824201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.824245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.824396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.824440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.824605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.824630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.825359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.825388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.825576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.825620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.825788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.825814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.825988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.826031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.826211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.826254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.826464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.826492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.826667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.826692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 
00:24:59.307 [2024-07-15 13:17:20.826812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.826836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.826974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.827128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.827328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.827513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.827694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.827883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.827911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.828046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.828072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.828221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.828250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.307 [2024-07-15 13:17:20.828418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.828458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 
00:24:59.307 [2024-07-15 13:17:20.828614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.307 [2024-07-15 13:17:20.828642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.307 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.828805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.828830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.828961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.828989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.829112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.829139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.829303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.829345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.829528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.829557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.829744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.829769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.829900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.829937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.830058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.830089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.830274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.830304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 
00:24:59.308 [2024-07-15 13:17:20.830479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.830508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.830684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.830710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.830866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.830908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.831076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.831102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.831243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.831285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.831456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.831485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.831663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.831692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.831893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.831942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.832104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.832130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.832295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.832321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 
00:24:59.308 [2024-07-15 13:17:20.832480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.832522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.832709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.832738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.832908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.832941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.833070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.833097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.833282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.833311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.833509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.833538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.833709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.833745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.833914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.833958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.834140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.834179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.834338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.834365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 
00:24:59.308 [2024-07-15 13:17:20.834539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.834567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.834805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.834831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.834979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.835007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.835167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.835193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.835325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.835350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.308 qpair failed and we were unable to recover it. 00:24:59.308 [2024-07-15 13:17:20.835514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.308 [2024-07-15 13:17:20.835539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.835711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.835754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.835914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.835941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.836076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.836102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.836282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.836310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 
00:24:59.309 [2024-07-15 13:17:20.836497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.836540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.836715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.836741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.836933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.836960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.837117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.837143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.837325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.837350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.837508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.837534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.837686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.837712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.837896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.837923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.838059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.838088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.838260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.838287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 
00:24:59.309 [2024-07-15 13:17:20.838467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.838497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.838674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.838699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.838882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.838908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.839037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.839063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.839247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.839290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.839495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.839524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.839719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.839744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.839907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.839933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.840089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.840114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.840326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.840355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 
00:24:59.309 [2024-07-15 13:17:20.840497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.840524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.840677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.840702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.840838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.840863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.841866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.841900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.842036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 
00:24:59.309 [2024-07-15 13:17:20.842193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.842400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.842587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.842768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.842923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.842951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.309 [2024-07-15 13:17:20.843119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.309 [2024-07-15 13:17:20.843145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.309 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.843324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.843367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.843577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.843620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.843752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.843779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.843940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.843966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 
00:24:59.310 [2024-07-15 13:17:20.844089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.844115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.844304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.844346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.844534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.844560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.844696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.844721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.844902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.844928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.845055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.845081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.845257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.845299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.845518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.845560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.845721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.845750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.845911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.845938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 
00:24:59.310 [2024-07-15 13:17:20.846076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.846230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.846411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.846568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.846777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.846928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.846954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.847114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.847139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.847285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.847328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.847541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.847584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.847746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.847772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 
00:24:59.310 [2024-07-15 13:17:20.847920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.847946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.848076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.848101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.848280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.848324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.848539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.848580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.848763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.848789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.848941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.848984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.849167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.849211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.849363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.849390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.849528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.849554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.849735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.849761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 
00:24:59.310 [2024-07-15 13:17:20.849889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.849915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.850090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.850134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.850285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.850328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.850499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.850541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.850697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.850732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.850942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.850986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.310 qpair failed and we were unable to recover it. 00:24:59.310 [2024-07-15 13:17:20.851133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.310 [2024-07-15 13:17:20.851163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.311 qpair failed and we were unable to recover it. 00:24:59.311 [2024-07-15 13:17:20.851359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.311 [2024-07-15 13:17:20.851385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.311 qpair failed and we were unable to recover it. 00:24:59.311 [2024-07-15 13:17:20.851517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.311 [2024-07-15 13:17:20.851543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.311 qpair failed and we were unable to recover it. 00:24:59.311 [2024-07-15 13:17:20.851668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.311 [2024-07-15 13:17:20.851695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.311 qpair failed and we were unable to recover it. 
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeats continuously from 13:17:20.851863 through 13:17:20.891021, alternating between tqpair=0x7f5e8c000b90 and tqpair=0x13f2200, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:24:59.316 [2024-07-15 13:17:20.891180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.891205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.891405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.891446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.891611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.891638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.891852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.891886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.892059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.892087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.892236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.892264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.892444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.892472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.892650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.892694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.892855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.892889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.316 [2024-07-15 13:17:20.893018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.316 [2024-07-15 13:17:20.893044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.316 qpair failed and we were unable to recover it.
00:24:59.321 [2024-07-15 13:17:20.925855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.925890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.926044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.926069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.926193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.926218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.926392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.926420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.926646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.926675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.926882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.926907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.927047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.927072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.927213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.927241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.927413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.927441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.927634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.927662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 
00:24:59.321 [2024-07-15 13:17:20.927823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.927851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.928922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.928948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.929079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.929104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.929272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.929300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.929475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.929500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 
00:24:59.321 [2024-07-15 13:17:20.929664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.321 [2024-07-15 13:17:20.929689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.321 qpair failed and we were unable to recover it. 00:24:59.321 [2024-07-15 13:17:20.929844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.929869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.930884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.930912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.931058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.931083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.931211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.931238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 
00:24:59.322 [2024-07-15 13:17:20.931428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.931453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.931621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.931649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.931844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.931869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.932063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.932088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.932270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.932295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.932505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.932530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.932703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.932731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.932938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.932964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.933081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.933106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.933269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.933296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 
00:24:59.322 [2024-07-15 13:17:20.933490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.933518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.933681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.933709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.933893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.933946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.934128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.934157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.934313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.934340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.934545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.934573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.934747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.934775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.934926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.934951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.935080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.935105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.935314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.935342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 
00:24:59.322 [2024-07-15 13:17:20.935582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.935610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.935755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.935783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.935966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.935998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.936143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.936168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.936308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.936336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.936503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.322 [2024-07-15 13:17:20.936531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.322 qpair failed and we were unable to recover it. 00:24:59.322 [2024-07-15 13:17:20.936698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.936726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.936940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.936966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.937097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.937122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.937290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.937315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 
00:24:59.323 [2024-07-15 13:17:20.937487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.937514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.937680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.937708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.937861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.937893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.938059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.938084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.938256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.938283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.938462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.938487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.938662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.938689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.938856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.938890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.939074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.939100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.939221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.939246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 
00:24:59.323 [2024-07-15 13:17:20.939412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.939444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.939687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.939715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.939933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.939958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.940095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.940121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.940312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.940337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.940487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.940512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.940695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.940720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.940844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.940869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.941045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.941215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 
00:24:59.323 [2024-07-15 13:17:20.941390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.941583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.941778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.941961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.941986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.942974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.942999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 
00:24:59.323 [2024-07-15 13:17:20.943147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.943188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.943333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.943361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.943543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.943569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.943717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.943742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.943943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.943972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.944125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.944150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.944356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.944394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.944566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.944595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.944713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.944738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.944893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.944919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 
00:24:59.323 [2024-07-15 13:17:20.945054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.323 [2024-07-15 13:17:20.945095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.323 qpair failed and we were unable to recover it. 00:24:59.323 [2024-07-15 13:17:20.945273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.945298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.945447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.945472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.945626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.945654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.945803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.945828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.946000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.946028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.946173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.946202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.946403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.946429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.946627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.946655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.946820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.946848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 
00:24:59.324 [2024-07-15 13:17:20.947012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.947038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.947173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.947198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.947355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.947396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.947577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.947602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.947805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.947833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.948004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.948201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.948402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.948576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.948770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 
00:24:59.324 [2024-07-15 13:17:20.948932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.948957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.949107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.949287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.949462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.949653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.949837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.949995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.950021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.950209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.950249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.950443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.950468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 00:24:59.324 [2024-07-15 13:17:20.950644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.324 [2024-07-15 13:17:20.950672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.324 qpair failed and we were unable to recover it. 
00:24:59.324 [2024-07-15 13:17:20.950829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.324 [2024-07-15 13:17:20.950856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.324 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x13f2200 with only the timestamps advancing ...]
00:24:59.609 [2024-07-15 13:17:20.958349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.609 [2024-07-15 13:17:20.958396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.609 qpair failed and we were unable to recover it.
[... roughly 200 further repetitions of this three-line failure elided: timestamps run from 13:17:20.950829 to 13:17:20.990131 and the handle switches back and forth between tqpair=0x13f2200 and tqpair=0x7f5e84000b90; everything else is identical ...]
00:24:59.614 [2024-07-15 13:17:20.990105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.614 [2024-07-15 13:17:20.990131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.614 qpair failed and we were unable to recover it.
00:24:59.614 [2024-07-15 13:17:20.990292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.990317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.990500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.990526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.990708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.990733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.990908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.990949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.991114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.991293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.991476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.991656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.991813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.991976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.992003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 
00:24:59.614 [2024-07-15 13:17:20.992159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.992184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.992368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.992394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.992546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.992571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.992772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.992827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.992999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.993028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.993193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.993220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.993400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.993444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.993622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.993666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.993828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.993854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.994054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.994082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 
00:24:59.614 [2024-07-15 13:17:20.994234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.994264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.994549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.994602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.994767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.994797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.994981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.995007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.995177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.995204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.995371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.995400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.995595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.995628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.995842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.995868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.996053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.996078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 00:24:59.614 [2024-07-15 13:17:20.996254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.996282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.614 qpair failed and we were unable to recover it. 
00:24:59.614 [2024-07-15 13:17:20.996594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.614 [2024-07-15 13:17:20.996652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.996789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.996818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.996998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.997024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.997176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.997204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.997409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.997434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.997581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.997610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.997781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.997809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.998009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.998035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.998183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.998212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.998466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.998517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 
00:24:59.615 [2024-07-15 13:17:20.998716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.998744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.998950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.998976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.999196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.999251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.999425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.999451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.999583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.999609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.999796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:20.999821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:20.999984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.000012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.000216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.000245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.000489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.000542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.000702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.000743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 
00:24:59.615 [2024-07-15 13:17:21.000930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.000956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.001116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.001141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.001356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.001381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.001559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.001588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.001754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.001782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.001958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.001984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.002181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.002209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.002539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.002595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.002764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.002792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.002970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.002995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 
00:24:59.615 [2024-07-15 13:17:21.003152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.003197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.003376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.003550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.003578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.003781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.003810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.003989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.004015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.004189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.004217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.004507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.004566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.004767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.004795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.004998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.005024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.005238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.005288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 
00:24:59.615 [2024-07-15 13:17:21.005489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.615 [2024-07-15 13:17:21.005514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.615 qpair failed and we were unable to recover it. 00:24:59.615 [2024-07-15 13:17:21.005689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.005718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.005893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.005938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.006095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.006120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.006254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.006280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.006434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.006460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.006630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.006660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.006833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.006861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.007062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.007088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.007270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.007296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 
00:24:59.616 [2024-07-15 13:17:21.007469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.007498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.007665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.007692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.007835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.007861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.008001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.008045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.008247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.008275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.008423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.008448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.008607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.008632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.008808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.008836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.009009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.009035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.009214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.009257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 
00:24:59.616 [2024-07-15 13:17:21.009428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.009455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.009667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.009692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.009824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.009849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.010068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.010282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.010467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.010674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.010831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.010973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.011151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 
00:24:59.616 [2024-07-15 13:17:21.011332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.011482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.011667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.011854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.011887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.012926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.012954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 
00:24:59.616 [2024-07-15 13:17:21.013110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.013136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.616 [2024-07-15 13:17:21.013262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.616 [2024-07-15 13:17:21.013289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.616 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.013442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.013468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.013592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.013619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.013774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.013799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.013982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.014016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.014168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.014193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.014348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.014374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.014554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.014579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.014730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.014756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 
00:24:59.617 [2024-07-15 13:17:21.014958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.015173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.015356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.015566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.015789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.015945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.015971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.016095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.016120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.016278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.016305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.016463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.016490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 00:24:59.617 [2024-07-15 13:17:21.016648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.617 [2024-07-15 13:17:21.016674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.617 qpair failed and we were unable to recover it. 
00:24:59.617 [2024-07-15 13:17:21.016834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.617 [2024-07-15 13:17:21.016859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.617 qpair failed and we were unable to recover it.
00:24:59.618 [2024-07-15 13:17:21.021663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.618 [2024-07-15 13:17:21.021692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.618 qpair failed and we were unable to recover it.
00:24:59.622 [2024-07-15 13:17:21.048182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.622 [2024-07-15 13:17:21.048209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.622 qpair failed and we were unable to recover it.
00:24:59.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3937244 Killed "${NVMF_APP[@]}" "$@"
00:24:59.622 [2024-07-15 13:17:21.049689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.622 [2024-07-15 13:17:21.049714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.622 qpair failed and we were unable to recover it.
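Note: errno = 111 is ECONNREFUSED. Once the target application has been killed (the shell message above), nothing is listening on 10.0.0.2:4420, so every connect() the host retries is refused. A minimal shell sketch that reproduces the same failure, assuming a local port with no listener (127.0.0.1 and the port are illustrative, not taken from the test configuration):

# A TCP connect to a port with no listener fails with ECONNREFUSED (111);
# bash reports "Connection refused" and the redirection exits nonzero.
bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' ||
    echo "connect() failed, errno = 111 (ECONNREFUSED)"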
00:24:59.622 [2024-07-15 13:17:21.049894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.622 [2024-07-15 13:17:21.049921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.622 qpair failed and we were unable to recover it.
00:24:59.622 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:59.622 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:59.622 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:59.622 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:59.622 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
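The xtrace lines above show the test entering its recovery path: disconnect_init calls nvmfappstart -m 0xF0 to bring the target back up while the host keeps retrying. A hedged sketch of that kind of re-initialization using SPDK's standard RPCs follows; the binary path, the sleep, and the NQN are illustrative assumptions, not the verbatim contents of target_disconnect.sh:

# Restart the NVMe-oF target and restore the TCP listener so the host can
# reconnect. nqn.2016-06.io.spdk:cnode1 is an assumed example NQN.
./build/bin/nvmf_tgt -m 0xF0 &
sleep 2                                   # allow the app to initialize
./scripts/rpc.py nvmf_create_transport -t TCP
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420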
00:24:59.622 [2024-07-15 13:17:21.051364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.051394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.051547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.051573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.051755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.051780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.051926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.051954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.052130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.052156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.052345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.052372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.052527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.052553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.052712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.052743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.052901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.052928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.053062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.053089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 
00:24:59.623 [2024-07-15 13:17:21.053285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.053311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.053469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.053494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.053648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.053804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.053830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.053997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.054179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.054375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.054534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 [2024-07-15 13:17:21.054736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 
00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3937802 00:24:59.623 [2024-07-15 13:17:21.054921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.054949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3937802 00:24:59.623 [2024-07-15 13:17:21.055109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.055135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3937802 ']' 00:24:59.623 [2024-07-15 13:17:21.055318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.055344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.623 [2024-07-15 13:17:21.055511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.055537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.623 [2024-07-15 13:17:21.055707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.055739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.623 [2024-07-15 13:17:21.055895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.623 [2024-07-15 13:17:21.055931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.623 qpair failed and we were unable to recover it. 
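The waitforlisten step traced here gates the rest of the test: the freshly started nvmf_tgt (pid 3937802) must come up and accept RPC connections on /var/tmp/spdk.sock before the host side proceeds, with max_retries=100 bounding the wait. A rough C sketch of the same wait-until-the-UNIX-socket-accepts idea (the socket path and retry bound come from the trace above; the 100 ms interval is an assumption, and the real autotest helper drives this check through SPDK's RPC tooling rather than a raw connect):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

        for (int retry = 0; retry < 100; retry++) {       /* mirrors max_retries=100 */
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("RPC socket is up after %d retries\n", retry);
                close(fd);
                return 0;
            }
            if (fd >= 0)
                close(fd);
            usleep(100 * 1000);                           /* 100 ms between attempts, illustrative */
        }
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }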
00:24:59.623 13:17:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:59.623 [2024-07-15 13:17:21.056080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.623 [2024-07-15 13:17:21.056107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.623 qpair failed and we were unable to recover it.
00:24:59.623 [... connect()/qpair-failed triple repeats for tqpair=0x7f5e94000b90, 13:17:21.056265 through 13:17:21.057592 ...]
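Both the nvmfappstart -m 0xF0 call and the nvmf_tgt command above pass a CPU core mask: 0xF0 has bits 4 through 7 set, so the restarted target's reactors run on cores 4-7 (presumably to keep them apart from the cores the host side of the test occupies; the log itself does not state the motivation). A throwaway C decoder for such a mask, just to make the arithmetic concrete (not SPDK code):

    #include <stdio.h>

    int main(void) {
        unsigned long mask = 0xF0;   /* the -m argument from the log */
        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core))
                printf(" %d", core);
        }
        printf("\n");                /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
        return 0;
    }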
00:24:59.624 [2024-07-15 13:17:21.057803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.624 [2024-07-15 13:17:21.057831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.624 qpair failed and we were unable to recover it.
00:24:59.624 [... connect()/qpair-failed triple repeats for tqpair=0x7f5e94000b90, 13:17:21.057995 through 13:17:21.058855 ...]
00:24:59.624 [... connect()/qpair-failed triple repeats for tqpair=0x7f5e84000b90, 13:17:21.059037 through 13:17:21.061063 ...]
00:24:59.624 [... connect()/qpair-failed triple repeats for tqpair=0x7f5e94000b90, 13:17:21.061290 through 13:17:21.088478; runtime stamps advance from 00:24:59.624 to 00:24:59.628 ...]
00:24:59.628 [2024-07-15 13:17:21.088634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.088660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.088782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.088807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.088990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.089150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.089367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.089553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.089707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.089887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.089913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.090064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.090223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 
00:24:59.628 [2024-07-15 13:17:21.090417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.090557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.090722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.090871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.090904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.091934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.091960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 
00:24:59.628 [2024-07-15 13:17:21.092087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.092112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.092277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.092302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.092455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.092480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.092636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.092661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.092828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.092853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.093013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.093042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.093173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.093198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.093382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.093408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.093561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.093586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.628 qpair failed and we were unable to recover it. 00:24:59.628 [2024-07-15 13:17:21.093775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.628 [2024-07-15 13:17:21.093800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 
00:24:59.629 [2024-07-15 13:17:21.093959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.093985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.094139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.094168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.094328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.094353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.094535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.094561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.094753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.094778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.094934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.094960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.095089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.095114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.095251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.095276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.095464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.095489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.095675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.095700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 
00:24:59.629 [2024-07-15 13:17:21.095887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.095913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.096091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.096116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.096277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.096303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.096495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.096520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.096673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.096698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.096871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.096906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.097096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.097121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.097283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.097308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.097436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.097461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 00:24:59.629 [2024-07-15 13:17:21.097591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.629 [2024-07-15 13:17:21.097618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.629 qpair failed and we were unable to recover it. 
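On Linux, errno = 111 is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 yet, so every qpair connect attempt is refused and retried. A minimal standalone sketch (plain POSIX sockets, not SPDK code; address and port taken from the log above) that produces the same errno against any reachable host with no listener on the port:

/* connect_refused.c - standalone sketch (not SPDK code): connect() to a
 * reachable host with no listener on the port fails with errno 111,
 * ECONNREFUSED, the same errno posix_sock_create logs above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with a live host and a closed port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Against an unreachable host the same call would fail with ETIMEDOUT or EHOSTUNREACH instead; a steady stream of errno 111 therefore suggests the target machine is up and only the listener on port 4420 has not started yet.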
[... two more connect() failed / qpair failed triples for tqpair=0x7f5e94000b90 at 13:17:21.097801 and 13:17:21.097984 ...]
00:24:59.629 [2024-07-15 13:17:21.098185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.629 [2024-07-15 13:17:21.098222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.629 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7f5e8c000b90, timestamps 13:17:21.098402 through 13:17:21.100832 ...]
00:24:59.629 [2024-07-15 13:17:21.101064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.629 [2024-07-15 13:17:21.101107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.629 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7f5e84000b90, timestamps 13:17:21.101348 through 13:17:21.101785 ...]
[... the triple continues for tqpair=0x7f5e84000b90, timestamps 13:17:21.101966 through 13:17:21.103031 ...]
00:24:59.630 [2024-07-15 13:17:21.103226] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:24:59.630 [2024-07-15 13:17:21.103310] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... connect() failed / qpair failed triples for tqpair=0x7f5e84000b90 interleave with the two startup lines above, timestamps 13:17:21.103248 through 13:17:21.103653 ...]
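The EAL parameter list shows the newly started nvmf application being pinned by coremask: -c 0xF0 sets bits 4 through 7, i.e. its threads are confined to CPU cores 4-7. A small sketch of how a DPDK-style hexadecimal coremask decodes into core indices (illustration only, not DPDK or SPDK code; the mask value is the one from the log):

/* coremask.c - decode a DPDK-style hex coremask such as -c 0xF0
 * into the CPU core indices it enables (bit n set => core n enabled). */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16); /* value from the log */

    printf("coremask 0x%lX enables cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1) {
            printf(" %d", core);                    /* prints: 4 5 6 7 */
        }
    }
    printf("\n");
    return 0;
}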
[... the triple repeats for tqpair=0x7f5e84000b90, timestamps 13:17:21.103777 through 13:17:21.106281 ...]
00:24:59.630 [2024-07-15 13:17:21.106639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.630 [2024-07-15 13:17:21.106708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.630 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7f5e94000b90, timestamps 13:17:21.106869 through 13:17:21.122898 ...]
00:24:59.632 [2024-07-15 13:17:21.123081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.123107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.123261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.123287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.123475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.123501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.123623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.123650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.123811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.123837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.123981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.124009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.124187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.124213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.124394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.124420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.124604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.124630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.124783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.124808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 
00:24:59.632 [2024-07-15 13:17:21.124974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.125157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.125367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.125521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.125727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.125934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.632 [2024-07-15 13:17:21.125964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.632 qpair failed and we were unable to recover it. 00:24:59.632 [2024-07-15 13:17:21.126098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.126123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.126274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.126299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.126452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.126477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.126622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.126647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 
00:24:59.633 [2024-07-15 13:17:21.126797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.126822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.126983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.127192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.127375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.127559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.127762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.127958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.127983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.128142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.128167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.128352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.128377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.128542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.128567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 
00:24:59.633 [2024-07-15 13:17:21.128721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.128747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.128905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.128930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.129968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.129994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.130153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.130180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.130333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.130359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 
00:24:59.633 [2024-07-15 13:17:21.130514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.130540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.130693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.130718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.130913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.130939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.131118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.131143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.131293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.131318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.131475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.131501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.131658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.131684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.131864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.131925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.132119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.132144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.132324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.132349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 
00:24:59.633 [2024-07-15 13:17:21.132474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.132501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.132680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.132705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.132862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.132902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.133056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.133082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.133272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.133297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.133422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.133451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.133580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.633 [2024-07-15 13:17:21.133605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.633 qpair failed and we were unable to recover it. 00:24:59.633 [2024-07-15 13:17:21.133803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.133831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.134002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.134186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 
00:24:59.634 [2024-07-15 13:17:21.134367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.134552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.134751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.134971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.134998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.135178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.135204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.135336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.135361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.135513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.135538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.135692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.135718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.135874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.135906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 00:24:59.634 [2024-07-15 13:17:21.136068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.634 [2024-07-15 13:17:21.136093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.634 qpair failed and we were unable to recover it. 
00:24:59.634 [2024-07-15 13:17:21.137494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.634 EAL: No free 2048 kB hugepages reported on node 1
00:24:59.634 [2024-07-15 13:17:21.137520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.634 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed triplet continues repeating through 13:17:21.153842 ...]
00:24:59.636 [2024-07-15 13:17:21.154027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.636 [2024-07-15 13:17:21.154052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.636 qpair failed and we were unable to recover it. 00:24:59.636 [2024-07-15 13:17:21.154175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.636 [2024-07-15 13:17:21.154205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.636 qpair failed and we were unable to recover it. 00:24:59.636 [2024-07-15 13:17:21.154360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.636 [2024-07-15 13:17:21.154386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.636 qpair failed and we were unable to recover it. 00:24:59.636 [2024-07-15 13:17:21.154539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.636 [2024-07-15 13:17:21.154564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.154718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.154743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.154889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.154920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.155102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.155128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.155259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.155284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.155464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.155489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.155648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.155673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 
00:24:59.637 [2024-07-15 13:17:21.155828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.155855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.156945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.156971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.157098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.157124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.157310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.157335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.157528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.157554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 
00:24:59.637 [2024-07-15 13:17:21.157706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.157732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.157896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.157922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.158048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.158073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.158244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.158269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.158432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.158457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.158637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.158662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.158788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.158813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.159000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.159185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.159398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 
00:24:59.637 [2024-07-15 13:17:21.159551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.159733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.159944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.159970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.160874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.160906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.161088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.161113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 
00:24:59.637 [2024-07-15 13:17:21.161295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.161320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.161475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.161500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.161684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.161709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.161864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.637 [2024-07-15 13:17:21.161896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.637 qpair failed and we were unable to recover it. 00:24:59.637 [2024-07-15 13:17:21.162051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.162077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.162239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.162268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.162425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.162452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.162632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.162657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.162789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.162816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.162999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.163025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 
00:24:59.638 [2024-07-15 13:17:21.163179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.163205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.163386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.163411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.163594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.163619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.163771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.163796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.163981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.164137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.164319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.164493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.164641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.164849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.164874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 
00:24:59.638 [2024-07-15 13:17:21.165071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.165241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.165420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.165566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.165742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.165926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.165951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.166104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.166129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.166248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.166274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.166429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.166455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.166608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.166633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 
00:24:59.638 [2024-07-15 13:17:21.166793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.166818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.167891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.168079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.168104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.168263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.168288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.168440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.168466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 
00:24:59.638 [2024-07-15 13:17:21.168598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.168624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.168803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.168828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.168985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.169011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.169164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.169190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.169317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.638 [2024-07-15 13:17:21.169344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.638 qpair failed and we were unable to recover it. 00:24:59.638 [2024-07-15 13:17:21.169526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.169555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.169686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.169711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.169872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.169905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.170083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.170293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 
00:24:59.639 [2024-07-15 13:17:21.170449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.170602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.170750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.170956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.170983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.171189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.171369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.171550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.171695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.171839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 00:24:59.639 [2024-07-15 13:17:21.171974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.639 [2024-07-15 13:17:21.172001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.639 qpair failed and we were unable to recover it. 
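For context on the flood of errors above: errno 111 on Linux is ECONNREFUSED, which connect() returns when nothing is listening at the target address (here 10.0.0.2 on 4420, the standard NVMe/TCP port). The standalone sketch below (plain POSIX sockets, not SPDK code; 127.0.0.1 is used purely so it runs anywhere) reproduces exactly that errno:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumed: no listener on this port */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}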
00:24:59.639 [2024-07-15 13:17:21.172008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:59.639 [2024-07-15 13:17:21.172122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.639 [2024-07-15 13:17:21.172147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.639 qpair failed and we were unable to recover it.
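The single *NOTICE* above comes from app.c while spdk_app_start() brings up the SPDK application framework; the connection errors then resume around it. A minimal sketch of that bootstrap, assuming a recent SPDK where spdk_app_opts_init() takes the size of the opts struct (the app name and reactor mask here are hypothetical, with 0xF chosen to match the four cores in the notice):

#include "spdk/event.h"

static void
start_fn(void *ctx)
{
    /* Application work would be scheduled here on the reactors. */
    spdk_app_stop(0);
}

int
main(void)
{
    struct spdk_app_opts opts;
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "demo";            /* hypothetical application name */
    opts.reactor_mask = "0xF";     /* four cores, as in the notice above */

    /* spdk_app_start() initializes the env, prints the core-count NOTICE,
     * runs start_fn on the main reactor, and blocks until spdk_app_stop(). */
    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}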
[... the connect()/qpair-failure error triplet keeps repeating through 2024-07-15 13:17:21.184326 ...]
00:24:59.641 [2024-07-15 13:17:21.184454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.641 [2024-07-15 13:17:21.184479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.641 qpair failed and we were unable to recover it.
00:24:59.641 [2024-07-15 13:17:21.184607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.184633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.184764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.184789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.184937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.184963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.185095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.185122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.185252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.185277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.185457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.185483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.185641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.185667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.185816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.185842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.186011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.186238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 
00:24:59.641 [2024-07-15 13:17:21.186430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.186633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.186790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.186969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.186996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.187127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.187153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.187306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.187332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.187513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.187539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.187726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.187752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.187887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.187913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.188040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 
00:24:59.641 [2024-07-15 13:17:21.188196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.188382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.188591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.188749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.188931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.188958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.189120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.189146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.189295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.189321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.189473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.189499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.189622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.189647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 00:24:59.641 [2024-07-15 13:17:21.189781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.641 [2024-07-15 13:17:21.189808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.641 qpair failed and we were unable to recover it. 
[... identical failures against tqpair=0x7f5e94000b90 continue from 13:17:21.189994 through 13:17:21.190936 ...]
00:24:59.642 [2024-07-15 13:17:21.191120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.642 [2024-07-15 13:17:21.191162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.642 qpair failed and we were unable to recover it.
[... the same failure now repeats against tqpair=0x13f2200 from 13:17:21.191307 through 13:17:21.193462 ...]
[... two further failures against tqpair=0x13f2200 at 13:17:21.193644 and 13:17:21.193847 ...]
00:24:59.642 [2024-07-15 13:17:21.194016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.642 [2024-07-15 13:17:21.194044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.642 qpair failed and we were unable to recover it.
[... the failure pattern resumes against tqpair=0x7f5e94000b90 and repeats for every attempt from 13:17:21.194174 through 13:17:21.214843 ...]
00:24:59.645 [2024-07-15 13:17:21.215007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.645 [2024-07-15 13:17:21.215034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:24:59.645 qpair failed and we were unable to recover it.
00:24:59.645 [2024-07-15 13:17:21.215164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.215191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.215350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.215375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.215518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.215544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.215667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.215693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.215839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.215865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.216024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.216208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.216389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.216571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.216765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 
00:24:59.645 [2024-07-15 13:17:21.216914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.216951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.217078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.217104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.217255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.645 [2024-07-15 13:17:21.217441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.645 [2024-07-15 13:17:21.217467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.645 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.217617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.217643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.217770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.217795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.217947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.217974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.218108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.218266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.218450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 
00:24:59.646 [2024-07-15 13:17:21.218628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.218791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.218968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.218994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.219959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.219985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.220137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.220163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 
00:24:59.646 [2024-07-15 13:17:21.220339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.220365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.220522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.220548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.220670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.220696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.220852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.220884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.221943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.221970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 
00:24:59.646 [2024-07-15 13:17:21.222102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.222127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.222282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.222309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.222460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.222486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.222608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.222634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.222814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.222840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.223006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.223185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.223340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.223515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.223667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 
00:24:59.646 [2024-07-15 13:17:21.223826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.223852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.224011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.224037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.224184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.646 [2024-07-15 13:17:21.224210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.646 qpair failed and we were unable to recover it. 00:24:59.646 [2024-07-15 13:17:21.224344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.224371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.224498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.224524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.224713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.224739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.224893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.224920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.225056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.225267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.225426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 
00:24:59.647 [2024-07-15 13:17:21.225577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.225759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.225916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.226107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.226133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.226316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.226342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.226492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.226651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.226681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.226840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.226866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.227024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.227255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 
00:24:59.647 [2024-07-15 13:17:21.227430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.227636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.227790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.227973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.227999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.228200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.228226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.228378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.228404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.228529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.228555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.228680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.228707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.228862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.228895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.229058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 
00:24:59.647 [2024-07-15 13:17:21.229241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.229424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.229571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.229779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.229936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.229963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.230093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.230119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.230282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.230308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.230465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.230491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.230668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.230694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.230817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.230843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 
00:24:59.647 [2024-07-15 13:17:21.231029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.231055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.231285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.231310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.231436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.647 [2024-07-15 13:17:21.231461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.647 qpair failed and we were unable to recover it. 00:24:59.647 [2024-07-15 13:17:21.231599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.231625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.231808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.231834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.232001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.232184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.232365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.232544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.232700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 
00:24:59.648 [2024-07-15 13:17:21.232900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.232926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.233057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.233082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.233262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.233288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.233470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.233495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.233651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.233677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.233836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.233862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.234006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.234197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.234413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.234587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 
00:24:59.648 [2024-07-15 13:17:21.234746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.234931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.234958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.235122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.235148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.235300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.235326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.235451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.235476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.235662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.235688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.235843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.235869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.236037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.236215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.236363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 
00:24:59.648 [2024-07-15 13:17:21.236516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.236671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.236847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.236872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.237841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.237872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 00:24:59.648 [2024-07-15 13:17:21.238006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.238031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it. 
00:24:59.648 [2024-07-15 13:17:21.238195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.648 [2024-07-15 13:17:21.238221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.648 qpair failed and we were unable to recover it.
00:24:59.648 [... the same three-message error sequence repeats roughly 200 more times with advancing timestamps (13:17:21.238 through 13:17:21.275): every connect() to 10.0.0.2 port 4420 fails with errno = 111 on tqpair=0x7f5e94000b90 and each qpair fails without recovery; duplicate entries elided ...]
00:24:59.654 [2024-07-15 13:17:21.275522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.275548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it.
00:24:59.654 [2024-07-15 13:17:21.275673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.275700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.275891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.275918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.276931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.276958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.277116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.277142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.277320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.277346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 
00:24:59.654 [2024-07-15 13:17:21.277494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.277520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.277681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.277707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.277837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.277863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.277999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.278169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.278330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.278537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.278694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.278864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.278895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.279087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 
00:24:59.654 [2024-07-15 13:17:21.279255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.279446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.279633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.279794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.279973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.279999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.280151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.654 [2024-07-15 13:17:21.280176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.654 qpair failed and we were unable to recover it. 00:24:59.654 [2024-07-15 13:17:21.280358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.280383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.280545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.280570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.280722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.280748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.280884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.280910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 
00:24:59.655 [2024-07-15 13:17:21.281046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.281072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.281256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.281281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.281430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.281455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.281628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.281658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.281812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.281838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.282003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.282187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.282365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.282548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.282761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 
00:24:59.655 [2024-07-15 13:17:21.282915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.282941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.655 [2024-07-15 13:17:21.283089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.655 [2024-07-15 13:17:21.283115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.655 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.283314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.283351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.283486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.283512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.283686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.283711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.283864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.283900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.284041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.284246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.284406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.284588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 
00:24:59.941 [2024-07-15 13:17:21.284771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.284932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.284959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.285886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.285913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.286047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.286072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 00:24:59.941 [2024-07-15 13:17:21.286200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.941 [2024-07-15 13:17:21.286225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.941 qpair failed and we were unable to recover it. 
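For context on the repeated failure: errno = 111 is ECONNREFUSED on Linux, the error connect() returns when the target host is reachable but nothing is listening on the requested port; here the NVMe/TCP target at 10.0.0.2:4420 is evidently not accepting connections, so every qpair connect attempt is refused. A minimal standalone sketch (plain POSIX sockets, not SPDK's posix_sock_create) that reproduces the same errno against a port with no listener:

    /* Minimal sketch (not SPDK code): reproduce "connect() failed, errno = 111".
     * On Linux, errno 111 is ECONNREFUSED: the peer answers the TCP SYN with a
     * RST because no listener is bound to the port. The address and port below
     * simply mirror the log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* When the host is up but the port has no listener, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }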
00:24:59.941 [2024-07-15 13:17:21.286277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14000e0 (9): Bad file descriptor
[from 13:17:21.286465 the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triples resume, now for tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420, interleaved with the application start-up notices below]
00:24:59.941 [2024-07-15 13:17:21.286910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:59.941 [2024-07-15 13:17:21.286947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:59.941 [2024-07-15 13:17:21.286962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:59.941 [2024-07-15 13:17:21.286974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:59.941 [2024-07-15 13:17:21.286984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:59.941 [2024-07-15 13:17:21.287041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:59.942 [2024-07-15 13:17:21.287067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:59.941 [2024-07-15 13:17:21.287093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:59.941 [2024-07-15 13:17:21.287096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
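The flush failure above is a different errno: 9 is EBADF, meaning nvme_tcp_qpair_process_completions tried to flush a qpair whose socket descriptor had already been torn down. A small illustration (plain POSIX, not SPDK's flush path) of how using an already-closed descriptor yields exactly this error:

    /* Sketch (plain POSIX, not SPDK's flush path): any I/O on a descriptor
     * that has already been closed fails with errno 9 (EBADF), which is what
     * "(9): Bad file descriptor" in the flush error above denotes. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }

        close(fds[1]);                       /* descriptor torn down... */

        if (write(fds[1], "x", 1) < 0) {     /* ...then used again */
            /* Prints: write failed, errno = 9 (Bad file descriptor) */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fds[0]);
        return 0;
    }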
[the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triples continue uninterrupted from 13:17:21.287442, alternating between tqpair=0x7f5e84000b90 and tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420, ending with:]
00:24:59.944 [2024-07-15 13:17:21.304716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.944 [2024-07-15 13:17:21.304742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.944 qpair failed and we were unable to recover it.
00:24:59.944 [2024-07-15 13:17:21.304890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.304917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.305081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.305107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.305242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.305271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.305521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.305547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.305670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.305696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.305850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.305881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.306022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.306184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.306384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.306539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 
00:24:59.944 [2024-07-15 13:17:21.306729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.306886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.306914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.307956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.307983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.308140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.308165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.308327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.308353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 
00:24:59.944 [2024-07-15 13:17:21.308474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.308500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.308636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.308661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.944 qpair failed and we were unable to recover it. 00:24:59.944 [2024-07-15 13:17:21.308819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.944 [2024-07-15 13:17:21.308845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.308976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.309193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.309375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.309531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.309694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.309881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.309907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.310062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 
00:24:59.945 [2024-07-15 13:17:21.310247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.310437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.310620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.310778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.310941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.310967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.311116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.311142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.311263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.311289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.311466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.311504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.311674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.311701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.311834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.311859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 
00:24:59.945 [2024-07-15 13:17:21.312023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.312205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.312386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.312579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.312739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.312948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.312975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.313166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.313192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.313358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.313384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.313598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.313624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.313776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.313802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 
00:24:59.945 [2024-07-15 13:17:21.313968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.314217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.314397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.314560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.314712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.945 qpair failed and we were unable to recover it. 00:24:59.945 [2024-07-15 13:17:21.314895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.945 [2024-07-15 13:17:21.314921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.315077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.315259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.315411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.315570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 
00:24:59.946 [2024-07-15 13:17:21.315724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.315903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.315942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.316905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.316931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.317056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.317199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 
00:24:59.946 [2024-07-15 13:17:21.317386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.317543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.317718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.317884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.317910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.318908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.318935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 
00:24:59.946 [2024-07-15 13:17:21.319072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.319098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.319230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.319255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.319410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.319436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.319564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.319589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.319772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.319814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.319999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.320207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.320372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.320538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.320688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 
00:24:59.946 [2024-07-15 13:17:21.320851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.320893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.321024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.321049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.321177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.321202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.321339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.321364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.321508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.946 [2024-07-15 13:17:21.321534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.946 qpair failed and we were unable to recover it. 00:24:59.946 [2024-07-15 13:17:21.321656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.321681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.321802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.321827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.321970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.322162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.322314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 
00:24:59.947 [2024-07-15 13:17:21.322490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.322644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.322791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.322941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.322966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.323916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.323942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 
00:24:59.947 [2024-07-15 13:17:21.324073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.324224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.324402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.324559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.324710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.324916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.324942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.325078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.325230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.325385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.325588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 
00:24:59.947 [2024-07-15 13:17:21.325753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.325925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.325953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.326972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.327129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.327273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 
00:24:59.947 [2024-07-15 13:17:21.327419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.327573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.327760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.327934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.327960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.328117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.328143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.947 [2024-07-15 13:17:21.328264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.947 [2024-07-15 13:17:21.328289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.947 qpair failed and we were unable to recover it. 00:24:59.948 [2024-07-15 13:17:21.328441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.948 [2024-07-15 13:17:21.328466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.948 qpair failed and we were unable to recover it. 00:24:59.948 [2024-07-15 13:17:21.328601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.948 [2024-07-15 13:17:21.328627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.948 qpair failed and we were unable to recover it. 00:24:59.948 [2024-07-15 13:17:21.328746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.948 [2024-07-15 13:17:21.328771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.948 qpair failed and we were unable to recover it. 00:24:59.948 [2024-07-15 13:17:21.328895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.948 [2024-07-15 13:17:21.328921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.948 qpair failed and we were unable to recover it. 
00:24:59.948 [2024-07-15 13:17:21.329324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.948 [2024-07-15 13:17:21.329363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.948 qpair failed and we were unable to recover it.
[... the same pair repeats verbatim for tqpair=0x7f5e84000b90, timestamps 13:17:21.329520 through 13:17:21.364153, wall clock 00:24:59.948 through 00:24:59.953 ...]
00:24:59.953 [2024-07-15 13:17:21.364308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.364335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.364516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.364542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.364666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.364692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.364830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.364856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.365901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.365928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 
00:24:59.953 [2024-07-15 13:17:21.366113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.366265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.366419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.366572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.366751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.366904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.366930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.367088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.367113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.367268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.367294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.367463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.367488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.367618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.367643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 
00:24:59.953 [2024-07-15 13:17:21.367827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.367852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.367985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.368971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.368997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.369158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.369183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.369312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.369337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 
00:24:59.953 [2024-07-15 13:17:21.369502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.369528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.369681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.369711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.369840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.369866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.953 qpair failed and we were unable to recover it. 00:24:59.953 [2024-07-15 13:17:21.370017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.953 [2024-07-15 13:17:21.370043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.370175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.370202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.370355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.370380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.370514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.370539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.370791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.370817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.370977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.371130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 
00:24:59.954 [2024-07-15 13:17:21.371298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.371479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.371668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.371849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.371874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.372898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.372924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 
00:24:59.954 [2024-07-15 13:17:21.373082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.373231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.373384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.373564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.373737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.373899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.373925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.374076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.374254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.374432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.374610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 
00:24:59.954 [2024-07-15 13:17:21.374762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.374926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.374953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.375133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.375159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.375284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.375310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.375489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.375514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.375693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.375719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.375851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.375883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.376010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.954 [2024-07-15 13:17:21.376037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.954 qpair failed and we were unable to recover it. 00:24:59.954 [2024-07-15 13:17:21.376171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.376197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.376324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.376351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 
00:24:59.955 [2024-07-15 13:17:21.376512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.376538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.376666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.376696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.376833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.376858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.377894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.377922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.378085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.378111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 
00:24:59.955 [2024-07-15 13:17:21.378263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.378288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.378439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.378464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.378622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.378649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.378808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.378834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.378994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.379181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.379361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.379515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.379660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.379852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.379884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 
00:24:59.955 [2024-07-15 13:17:21.380014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.380196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.380349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.380536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.380696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.380848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.380873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.381019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.381177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.381360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.381537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 
00:24:59.955 [2024-07-15 13:17:21.381700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.381885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.381912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.382934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.382960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.955 [2024-07-15 13:17:21.383081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.955 [2024-07-15 13:17:21.383106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.955 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.383230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.383255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 
00:24:59.956 [2024-07-15 13:17:21.383374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.383401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.383529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.383559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.383679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.383705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.383824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.383850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.384798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 
00:24:59.956 [2024-07-15 13:17:21.384952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.384978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.385157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.385308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.385464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.385613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.385795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.385977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.386003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.386121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.386147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.386272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.386298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 00:24:59.956 [2024-07-15 13:17:21.386451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.956 [2024-07-15 13:17:21.386476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.956 qpair failed and we were unable to recover it. 
00:24:59.956 [2024-07-15 13:17:21.386612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.956 [2024-07-15 13:17:21.386638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.956 qpair failed and we were unable to recover it.
00:24:59.956 - 00:24:59.961 [the same three-line sequence repeats approximately 200 more times between 13:17:21.386815 and 13:17:21.421898, with only the timestamps and the tqpair pointer (0x7f5e8c000b90, 0x7f5e84000b90, 0x7f5e94000b90) varying; addr=10.0.0.2, port=4420 and errno = 111 are identical throughout. Duplicates omitted.]
00:24:59.961 [2024-07-15 13:17:21.422051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.961 [2024-07-15 13:17:21.422076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.961 qpair failed and we were unable to recover it. 00:24:59.961 [2024-07-15 13:17:21.422198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.961 [2024-07-15 13:17:21.422223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.961 qpair failed and we were unable to recover it. 00:24:59.961 [2024-07-15 13:17:21.422384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.961 [2024-07-15 13:17:21.422409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.961 qpair failed and we were unable to recover it. 00:24:59.961 [2024-07-15 13:17:21.422537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.961 [2024-07-15 13:17:21.422563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.961 qpair failed and we were unable to recover it. 00:24:59.961 [2024-07-15 13:17:21.422713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.961 [2024-07-15 13:17:21.422738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.961 qpair failed and we were unable to recover it. 00:24:59.961 [2024-07-15 13:17:21.422864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.422902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.423052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.423203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.423389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.423544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 
00:24:59.962 [2024-07-15 13:17:21.423688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.423829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.423855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.424834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.424859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.425023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.425197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 
00:24:59.962 [2024-07-15 13:17:21.425341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.425491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.425664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.425839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.425864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.426889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.426915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 
00:24:59.962 [2024-07-15 13:17:21.427040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.427189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.427335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.427513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.427688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.427836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.427861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.428026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.428051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.428177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.428203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.428353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.428382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 00:24:59.962 [2024-07-15 13:17:21.428530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.428556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.962 qpair failed and we were unable to recover it. 
00:24:59.962 [2024-07-15 13:17:21.428678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.962 [2024-07-15 13:17:21.428703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.428827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.428853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.429826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.429853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.430011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.430167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 
00:24:59.963 [2024-07-15 13:17:21.430361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.430508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.430688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.430868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.430903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.431838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.431863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 
00:24:59.963 [2024-07-15 13:17:21.432000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.432171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.432343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.432486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.432636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.432820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.432845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.433003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.433158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.433337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.433500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 
00:24:59.963 [2024-07-15 13:17:21.433683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.433831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.433856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.434884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.434910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.435059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.435089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.435220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.435245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 
00:24:59.963 [2024-07-15 13:17:21.435401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.963 [2024-07-15 13:17:21.435426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.963 qpair failed and we were unable to recover it. 00:24:59.963 [2024-07-15 13:17:21.435558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.435583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.435733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.435758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.435924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.435950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.436926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.436952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 
00:24:59.964 [2024-07-15 13:17:21.437079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.437280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.437450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.437627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.437777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.437960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.437985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.438119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.438303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.438459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.438600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 
00:24:59.964 [2024-07-15 13:17:21.438741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.438924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.438949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.439890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.439916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.440037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.440220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 
00:24:59.964 [2024-07-15 13:17:21.440403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.440562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.440725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.440906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.440931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.441946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.441974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 
00:24:59.964 [2024-07-15 13:17:21.442110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.442136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.964 [2024-07-15 13:17:21.442270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.964 [2024-07-15 13:17:21.442295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.964 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.442429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.442454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.442604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.442629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.442757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.442782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.442953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.442980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.443132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.443158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.443283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.443308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.443435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.443460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.443631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.443656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 
00:24:59.965 [2024-07-15 13:17:21.443811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.443850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.444857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.444889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.445034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.445211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.445371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 
00:24:59.965 [2024-07-15 13:17:21.445515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.445664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.445834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.446902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.446928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.447050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 
00:24:59.965 [2024-07-15 13:17:21.447200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.447382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.447535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.447682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.447859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.447906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.448084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.448259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.448419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.448574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.448755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 
00:24:59.965 [2024-07-15 13:17:21.448907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.448934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.449083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.449108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.965 [2024-07-15 13:17:21.449270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.965 [2024-07-15 13:17:21.449295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.965 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.449444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.449469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.449593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.449618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.449745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.449772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.449920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.449959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.450107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.450295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.450446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 
00:24:59.966 [2024-07-15 13:17:21.450631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.450786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.450967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.450994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.451914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.451942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.452093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 
00:24:59.966 [2024-07-15 13:17:21.452249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.452404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.452577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.452759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.452943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.452969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.453132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.453287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.453489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.453636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.453790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 
00:24:59.966 [2024-07-15 13:17:21.453940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.453967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.454903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.454929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.455077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.455251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.455403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 
00:24:59.966 [2024-07-15 13:17:21.455575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.455723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.455863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.455894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.966 qpair failed and we were unable to recover it. 00:24:59.966 [2024-07-15 13:17:21.456047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.966 [2024-07-15 13:17:21.456072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.456232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.456383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.456557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.456727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.456869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.456998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 
00:24:59.967 [2024-07-15 13:17:21.457141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.457314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.457471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.457636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.457790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.457972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.457997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.458120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.458266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.458425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.458607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 
00:24:59.967 [2024-07-15 13:17:21.458748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.458924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.458949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.459896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.459921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.460072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.460222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 
00:24:59.967 [2024-07-15 13:17:21.460397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.460551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.460710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.460872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.460922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.461052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.461079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.461208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.461233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.461357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.461384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.461514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.967 [2024-07-15 13:17:21.461539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.967 qpair failed and we were unable to recover it. 00:24:59.967 [2024-07-15 13:17:21.461654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.461679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.461821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.461847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 
00:24:59.968 [2024-07-15 13:17:21.461989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.462831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.462995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.463145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.463292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.463456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 
00:24:59.968 [2024-07-15 13:17:21.463607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.463785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.463968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.463994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.464893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.464918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.465046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 
00:24:59.968 [2024-07-15 13:17:21.465194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.465352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.465526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.465677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.465820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.465845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.466002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.466169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.466344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.466493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.466666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 
00:24:59.968 [2024-07-15 13:17:21.466814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.466839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.467968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.467994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.968 [2024-07-15 13:17:21.468109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.968 [2024-07-15 13:17:21.468134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.968 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.468280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.468305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 
00:24:59.969 [2024-07-15 13:17:21.468436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.468461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.468593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.468617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.468780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.468804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.468955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.468980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.469881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.469907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 
00:24:59.969 [2024-07-15 13:17:21.470032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.470843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.470999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.471144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.471290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.471447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 
00:24:59.969 [2024-07-15 13:17:21.471590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.471763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.471969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.471995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.472168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.472314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.472497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.472678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.472862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.472990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.473146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 
00:24:59.969 [2024-07-15 13:17:21.473290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.473429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.473578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.473733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.473935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.473960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.474075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.474100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.474219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.474244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.474362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.474387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.474540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.474564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 00:24:59.969 [2024-07-15 13:17:21.474694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.969 [2024-07-15 13:17:21.474718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.969 qpair failed and we were unable to recover it. 
00:24:59.969 [2024-07-15 13:17:21.474864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.474909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.475942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.475970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.476091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.476116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.476239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.476264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.970 [2024-07-15 13:17:21.476428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.970 [2024-07-15 13:17:21.476453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.970 qpair failed and we were unable to recover it.
00:24:59.974 [2024-07-15 13:17:21.504428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.504453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.504584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.504608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.504721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.504746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.504915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.504954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.505954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.505981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 
00:24:59.974 [2024-07-15 13:17:21.506110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.506279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.506426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.506570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.506722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.506861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.506891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.507037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.507062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.507215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.507240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.507363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.974 [2024-07-15 13:17:21.507388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.974 qpair failed and we were unable to recover it. 00:24:59.974 [2024-07-15 13:17:21.507553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.507579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 
00:24:59.975 [2024-07-15 13:17:21.507709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.507734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.507897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.507923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.508098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.508264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.508471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.508648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.508796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.508961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.509147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.509314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 
00:24:59.975 [2024-07-15 13:17:21.509470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.509648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.509831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.509857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.510920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.510948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 
00:24:59.975 [2024-07-15 13:17:21.511096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.511246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.511422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.511565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.511741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.511897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.511924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.512076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.512220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.512407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.512569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 
00:24:59.975 [2024-07-15 13:17:21.512745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.512898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.512924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.513058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.513083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.513213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.975 [2024-07-15 13:17:21.513238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.975 qpair failed and we were unable to recover it. 00:24:59.975 [2024-07-15 13:17:21.513364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.513388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.513518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.513544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.513690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.513715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.513831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.513856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.513981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.514147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 
00:24:59.976 [2024-07-15 13:17:21.514294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.514449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.514624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.514780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.514934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.514959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.515085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.515262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.515411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.515572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.515746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 
00:24:59.976 [2024-07-15 13:17:21.515933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.515959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.516882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.516907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.517059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.517235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.517409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 
00:24:59.976 [2024-07-15 13:17:21.517565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.517741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.517898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.517923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.518822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.518847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.519032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 
00:24:59.976 [2024-07-15 13:17:21.519184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.519331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.519497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.519674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.519850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.519881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.976 [2024-07-15 13:17:21.520003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.976 [2024-07-15 13:17:21.520028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.976 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.520148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.520173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.520301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.520326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.520507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.520532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.520663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.520688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 
00:24:59.977 [2024-07-15 13:17:21.520820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.520845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.521820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.521858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.522007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.522191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.522345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 
00:24:59.977 [2024-07-15 13:17:21.522526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.522678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.522858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.522893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.523907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.523946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.524101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 
00:24:59.977 [2024-07-15 13:17:21.524260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.524406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.524565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.524709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.524852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.524885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.525039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.525215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.525360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.525534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.525679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 
00:24:59.977 [2024-07-15 13:17:21.525859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.525905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.526078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.526264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.526423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.526568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.977 [2024-07-15 13:17:21.526713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.977 qpair failed and we were unable to recover it. 00:24:59.977 [2024-07-15 13:17:21.526871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.978 [2024-07-15 13:17:21.526905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.978 qpair failed and we were unable to recover it. 00:24:59.978 [2024-07-15 13:17:21.527052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.978 [2024-07-15 13:17:21.527077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.978 qpair failed and we were unable to recover it. 00:24:59.978 [2024-07-15 13:17:21.527213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.978 [2024-07-15 13:17:21.527238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.978 qpair failed and we were unable to recover it. 00:24:59.978 [2024-07-15 13:17:21.527359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.978 [2024-07-15 13:17:21.527384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.978 qpair failed and we were unable to recover it. 
00:24:59.978 [2024-07-15 13:17:21.527531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.978 [2024-07-15 13:17:21.527556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:24:59.978 qpair failed and we were unable to recover it.
00:24:59.979 [2024-07-15 13:17:21.534893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.979 [2024-07-15 13:17:21.534932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.979 qpair failed and we were unable to recover it.
00:24:59.983 [2024-07-15 13:17:21.559827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.559852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.560903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.560930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.561066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.561271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.561435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 
00:24:59.983 [2024-07-15 13:17:21.561604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.561787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.561951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.561989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.562947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.562974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.563129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 
00:24:59.983 [2024-07-15 13:17:21.563290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.563480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.563625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.563778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.563948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.563975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.564096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.564244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.564391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.564532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.564701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 
00:24:59.983 [2024-07-15 13:17:21.564850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.564881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.565960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.565999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.566157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.566184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.566309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.566335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 
00:24:59.983 [2024-07-15 13:17:21.566482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.566509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.983 [2024-07-15 13:17:21.566633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.983 [2024-07-15 13:17:21.566660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.983 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.566784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.566809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.566979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.567846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.567980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 
00:24:59.984 [2024-07-15 13:17:21.568157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.568314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.568502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.568715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.568866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.568900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.569026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.569186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.569355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.569499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.569678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 
00:24:59.984 [2024-07-15 13:17:21.569864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.569896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.570837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.570862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.571114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.571152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.571274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.984 [2024-07-15 13:17:21.571301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.984 qpair failed and we were unable to recover it. 00:24:59.984 [2024-07-15 13:17:21.571430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.571455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 
00:24:59.985 [2024-07-15 13:17:21.571584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.571612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.571739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.571764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.571902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.571928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.572892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.572918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.573056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 
00:24:59.985 [2024-07-15 13:17:21.573229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.573387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.573535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.573695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.573855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.573902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.574040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.574225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.574407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.574602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.574761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 
00:24:59.985 [2024-07-15 13:17:21.574948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.574973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.575910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.575935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.576071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.576247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.576413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 
00:24:59.985 [2024-07-15 13:17:21.576557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.576733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.576900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.576926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.577080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.577106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.577232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.577259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.577393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.577419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.577564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.985 [2024-07-15 13:17:21.577589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.985 qpair failed and we were unable to recover it. 00:24:59.985 [2024-07-15 13:17:21.577710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.577736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.577922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.577948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.578072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 
00:24:59.986 [2024-07-15 13:17:21.578262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.578415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.578570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.578757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.578947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.578979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.579131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.579277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.579435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.579579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.579754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 
00:24:59.986 [2024-07-15 13:17:21.579902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.579928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.580894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.580921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.581051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.581239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.581433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 
00:24:59.986 [2024-07-15 13:17:21.581586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.581742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.581949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.581975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.582932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.582958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 00:24:59.986 [2024-07-15 13:17:21.583108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.986 [2024-07-15 13:17:21.583134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:24:59.986 qpair failed and we were unable to recover it. 
00:24:59.986 [2024-07-15 13:17:21.583257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.986 [2024-07-15 13:17:21.583283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:24:59.986 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 13:17:21.583257 through 13:17:21.618383 (log clock 00:24:59.986 to 00:25:00.278): posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair handles 0x7f5e84000b90, 0x7f5e8c000b90, 0x7f5e94000b90, and 0x13f2200, always with addr=10.0.0.2, port=4420; and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:00.278 [2024-07-15 13:17:21.618509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.618536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.618660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.618686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.618815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.618842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.619833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.619860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.620033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 
00:25:00.278 [2024-07-15 13:17:21.620234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.620391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.620575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.620752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.620934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.620961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.621087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.621232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.621433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.621579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.621753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 
00:25:00.278 [2024-07-15 13:17:21.621942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.621970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.622903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.622929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.623052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.623213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.623358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 
00:25:00.278 [2024-07-15 13:17:21.623542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.623692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.623868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.623900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.624027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.624052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.624190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.278 [2024-07-15 13:17:21.624216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.278 qpair failed and we were unable to recover it. 00:25:00.278 [2024-07-15 13:17:21.624368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.624393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.624547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.624572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.624691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.624716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.624844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.624869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.625007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 
00:25:00.279 [2024-07-15 13:17:21.625161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.625309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.625513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.625654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.625816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.625855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.626005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.626170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.626363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.626529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.626675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 
00:25:00.279 [2024-07-15 13:17:21.626850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.626884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.627898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.627926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.628061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.628223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.628402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 
00:25:00.279 [2024-07-15 13:17:21.628556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.628735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.628883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.628909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.629858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.629889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 
00:25:00.279 [2024-07-15 13:17:21.630171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.630864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.630996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.279 [2024-07-15 13:17:21.631022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.279 qpair failed and we were unable to recover it. 00:25:00.279 [2024-07-15 13:17:21.631156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.631180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.631309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.631334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.631498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.631523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.631637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.631662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 
00:25:00.280 [2024-07-15 13:17:21.631804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.631843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.631996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.632164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.632358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.632568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.632719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.632906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.632932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 
00:25:00.280 [2024-07-15 13:17:21.633516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.633839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.633994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.634151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.634302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.634481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.634644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.634823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.634848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.635007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 
00:25:00.280 [2024-07-15 13:17:21.635153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.635336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.635476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.635628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.635808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.635833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 
00:25:00.280 [2024-07-15 13:17:21.636818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.636843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.636988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.637145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.637324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.637480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.637621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.280 [2024-07-15 13:17:21.637793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.280 [2024-07-15 13:17:21.637819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.280 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.637945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.637970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.638085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.638243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 
00:25:00.281 [2024-07-15 13:17:21.638420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.638579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.638734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.638886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.638912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.639795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.639834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 
00:25:00.281 [2024-07-15 13:17:21.640007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.640201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.640358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.640539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.640728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.640902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.640929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.641053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.641078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.641232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.641257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.641396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.641421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 00:25:00.281 [2024-07-15 13:17:21.641547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.281 [2024-07-15 13:17:21.641573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.281 qpair failed and we were unable to recover it. 
00:25:00.286 [2024-07-15 13:17:21.674195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.674219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.674334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.674359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.674523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.674548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.674685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.674710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.674841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.674866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.675029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.675054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.675181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.286 [2024-07-15 13:17:21.675206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.286 qpair failed and we were unable to recover it. 00:25:00.286 [2024-07-15 13:17:21.675356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.675381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.675532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.675556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.675708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.675734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 
00:25:00.287 [2024-07-15 13:17:21.675860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.675892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.676830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.676982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.677160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.677317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 
00:25:00.287 [2024-07-15 13:17:21.677473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.677631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.677782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.677955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.677981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.678888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.678927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 
00:25:00.287 [2024-07-15 13:17:21.679063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.679231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.679400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.679573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.679720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.679870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.679901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 
00:25:00.287 [2024-07-15 13:17:21.680650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.680823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.680976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.287 qpair failed and we were unable to recover it. 00:25:00.287 [2024-07-15 13:17:21.681963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.287 [2024-07-15 13:17:21.681988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.682104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 
00:25:00.288 [2024-07-15 13:17:21.682279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.682448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.682606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.682787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.682939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.682965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.683086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.683237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.683390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.683535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.683684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 
00:25:00.288 [2024-07-15 13:17:21.683842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.683867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.684864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.684897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.685029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.685191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.685351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 
00:25:00.288 [2024-07-15 13:17:21.685528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.685735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.685892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.685917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.686861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.686998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 
00:25:00.288 [2024-07-15 13:17:21.687148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.687293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.687463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.687613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.687795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.687942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.687967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.688099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.688123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.688242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.688267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.688432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.288 [2024-07-15 13:17:21.688457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.288 qpair failed and we were unable to recover it. 00:25:00.288 [2024-07-15 13:17:21.688588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.688614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 
00:25:00.289 [2024-07-15 13:17:21.688735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.688760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.688897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.688923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.689935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.689960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.690117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 
00:25:00.289 [2024-07-15 13:17:21.690276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.690429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.690578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.690739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.690920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.690944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.691092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.691242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.691405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.691555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.691722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 
00:25:00.289 [2024-07-15 13:17:21.691888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.691927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.692126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.692286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.692432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.692612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.692818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.692981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.693152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.693337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.693494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 
00:25:00.289 [2024-07-15 13:17:21.693678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.693855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.693885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.694954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.694980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 00:25:00.289 [2024-07-15 13:17:21.695115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.289 [2024-07-15 13:17:21.695139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.289 qpair failed and we were unable to recover it. 
00:25:00.290 [2024-07-15 13:17:21.695290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.290 [2024-07-15 13:17:21.695315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.290 qpair failed and we were unable to recover it.
00:25:00.290 [... the same three-line pattern — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." — repeats continuously from 13:17:21.695 through 13:17:21.729, varying only in timestamp and in which tqpair is failing (0x13f2200, 0x7f5e8c000b90, 0x7f5e94000b90, 0x7f5e84000b90); every attempt targets addr=10.0.0.2, port=4420 ...]
00:25:00.296 [2024-07-15 13:17:21.729989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.730196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.730351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.730505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.730681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.730859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.730890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.731020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.731173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.731351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.731514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 
00:25:00.296 [2024-07-15 13:17:21.731701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.731845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.731871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.732891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.732918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.733074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.733224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 
00:25:00.296 [2024-07-15 13:17:21.733368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.733546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.733693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.733871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.733904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.734862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.734898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 
00:25:00.296 [2024-07-15 13:17:21.735050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.296 [2024-07-15 13:17:21.735076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.296 qpair failed and we were unable to recover it. 00:25:00.296 [2024-07-15 13:17:21.735203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.735228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.735364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.735391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.735511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.735536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.735671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.735697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.735824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.735849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.736000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.736181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.736354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.736509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 
00:25:00.297 [2024-07-15 13:17:21.736700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.736844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.736869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.737856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.737887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.738011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.738198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 
00:25:00.297 [2024-07-15 13:17:21.738345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.738507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.738661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.738842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.738873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.739819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.739845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 
00:25:00.297 [2024-07-15 13:17:21.739989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.740161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.740348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.740491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.740693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.740849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.740883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.741007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.741173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.741328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.741534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 
00:25:00.297 [2024-07-15 13:17:21.741681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.297 [2024-07-15 13:17:21.741888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.297 [2024-07-15 13:17:21.741914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.297 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.742966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.742993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.743126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.743152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 
00:25:00.298 [2024-07-15 13:17:21.743293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.743331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.743490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.743517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.743673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.743699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.743850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.743882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.744834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.744860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 
00:25:00.298 [2024-07-15 13:17:21.745030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.745208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.745386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.745553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.745709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.745862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.745892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 
00:25:00.298 [2024-07-15 13:17:21.746660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.746969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.298 [2024-07-15 13:17:21.746994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.298 qpair failed and we were unable to recover it. 00:25:00.298 [2024-07-15 13:17:21.747117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.747270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.747417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.747577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.747782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.747963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.747991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.748135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 
00:25:00.299 [2024-07-15 13:17:21.748286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.748463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.748650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.748797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.748958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.748982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.749136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.749288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.749464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.749621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.749763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 
00:25:00.299 [2024-07-15 13:17:21.749955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.749994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.750138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.750165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.750313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.750339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.750504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.750530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.750652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.750678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.750862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.750897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.751065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.751090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.751244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.751269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.751427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.751451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 00:25:00.299 [2024-07-15 13:17:21.751585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.299 [2024-07-15 13:17:21.751610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.299 qpair failed and we were unable to recover it. 
00:25:00.299 [2024-07-15 13:17:21.751733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.299 [2024-07-15 13:17:21.751757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.299 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for every remaining connection attempt from 13:17:21.751 through 13:17:21.786 (console time 00:25:00.299-00:25:00.305): connect() failed, errno = 111, followed by a sock connection error on tqpair=0x13f2200, 0x7f5e8c000b90, or 0x7f5e94000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:25:00.305 [2024-07-15 13:17:21.786155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.305 [2024-07-15 13:17:21.786180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.305 qpair failed and we were unable to recover it.
00:25:00.305 [2024-07-15 13:17:21.786344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.786369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.786495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.786521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.786640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.786665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.786787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.786811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.786950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.786976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.787127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.787310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.787469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.787628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.787788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 
00:25:00.305 [2024-07-15 13:17:21.787959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.787985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.788139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.788165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.788323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.788349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.788515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.788541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.788699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.788725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.788861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.788892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 
00:25:00.305 [2024-07-15 13:17:21.789618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.789912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.789937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.790952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.790977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.791105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.791130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 
00:25:00.305 [2024-07-15 13:17:21.791253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.791278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.791460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.305 [2024-07-15 13:17:21.791485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.305 qpair failed and we were unable to recover it. 00:25:00.305 [2024-07-15 13:17:21.791607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.791632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.791778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.791803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.791932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.791958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.792109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.792314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.792461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.792636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.792783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 
00:25:00.306 [2024-07-15 13:17:21.792962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.792988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.793930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.793955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.794076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.794255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.794426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 
00:25:00.306 [2024-07-15 13:17:21.794578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.794748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.794924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.794949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.795903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.795928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.796067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 
00:25:00.306 [2024-07-15 13:17:21.796225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.796397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.796555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.796699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.796882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.796907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.797054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.797231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.797400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.797587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.797764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 
00:25:00.306 [2024-07-15 13:17:21.797919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.797944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.798082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.798107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.798246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.306 [2024-07-15 13:17:21.798271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.306 qpair failed and we were unable to recover it. 00:25:00.306 [2024-07-15 13:17:21.798394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.798418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.798551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.798576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.798712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.798737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.798889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.798914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.799041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.799225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.799388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 
00:25:00.307 [2024-07-15 13:17:21.799538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.799691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.799833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.799858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.800838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.800996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 
00:25:00.307 [2024-07-15 13:17:21.801203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.801370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.801548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.801702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.801907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.801933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.802064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.802240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.802397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.802555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.802715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 
00:25:00.307 [2024-07-15 13:17:21.802864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.802895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.803955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.803980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.804103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.804276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 
00:25:00.307 [2024-07-15 13:17:21.804418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.804591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.804740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.804885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.804911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.307 qpair failed and we were unable to recover it. 00:25:00.307 [2024-07-15 13:17:21.805061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.307 [2024-07-15 13:17:21.805086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.805234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.805258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.805378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.805403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.805559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.805583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.805751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.805775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.805905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.805931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 
00:25:00.308 [2024-07-15 13:17:21.806093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.806273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.806420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.806562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.806719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.806895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.806921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.807051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.807075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.807192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.807216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.807393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.807418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 00:25:00.308 [2024-07-15 13:17:21.807544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.308 [2024-07-15 13:17:21.807569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.308 qpair failed and we were unable to recover it. 
00:25:00.308 [2024-07-15 13:17:21.807690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.308 [2024-07-15 13:17:21.807717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.308 qpair failed and we were unable to recover it.
00:25:00.308 [2024-07-15 13:17:21.810121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.308 [2024-07-15 13:17:21.810160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.308 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for some 210 qpair connection attempts in total, from [2024-07-15 13:17:21.807690] through [2024-07-15 13:17:21.842422] (console time 00:25:00.308 to 00:25:00.313), with tqpair alternating between 0x13f2200 and 0x7f5e94000b90; every connect() to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair is abandoned unrecovered ...]
00:25:00.313 [2024-07-15 13:17:21.842550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.842575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.842694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.842718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.842841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.842866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.842986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.843011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.843145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.843170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.843347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.843371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.843496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.843519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.843640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.313 [2024-07-15 13:17:21.843664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.313 qpair failed and we were unable to recover it. 00:25:00.313 [2024-07-15 13:17:21.843818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.843843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.843976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 
00:25:00.314 [2024-07-15 13:17:21.844128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.844334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.844481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.844628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.844781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.844950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.844976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.845132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.845288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.845439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.845639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 
00:25:00.314 [2024-07-15 13:17:21.845793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.845968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.845993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.846893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.846919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.847042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.847248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 
00:25:00.314 [2024-07-15 13:17:21.847425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.847600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.847780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.847923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.847948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.848898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.848923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 
00:25:00.314 [2024-07-15 13:17:21.849051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.849869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.849996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.850021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.850179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.850205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.850356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.850384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.314 qpair failed and we were unable to recover it. 00:25:00.314 [2024-07-15 13:17:21.850523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.314 [2024-07-15 13:17:21.850547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 
00:25:00.315 [2024-07-15 13:17:21.850679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.850705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.850883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.850908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.851952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.851977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.852115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.852282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 
00:25:00.315 [2024-07-15 13:17:21.852432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.852607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.852756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.852905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.852930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.853821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.853846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 
00:25:00.315 [2024-07-15 13:17:21.853984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.854863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.854997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.855156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.855357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.855499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 
00:25:00.315 [2024-07-15 13:17:21.855678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.855859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.855889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.856959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.856984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 00:25:00.315 [2024-07-15 13:17:21.857146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.857171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.315 qpair failed and we were unable to recover it. 
00:25:00.315 [2024-07-15 13:17:21.857313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.315 [2024-07-15 13:17:21.857338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.857461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.857486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.857645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.857670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.857792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.857816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.857973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.857999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.858125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.858272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.858416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.858617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.858762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 
00:25:00.316 [2024-07-15 13:17:21.858934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.858959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.859961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.859987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.860142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.860167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.860313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.860337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.860466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.860491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 
00:25:00.316 [2024-07-15 13:17:21.860621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.860646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.860795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.860819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.860975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.861946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.861971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.862091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 
00:25:00.316 [2024-07-15 13:17:21.862256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.862431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.862588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.862787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.862929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.862954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.863109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.863287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.863476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.863639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.863795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 
00:25:00.316 [2024-07-15 13:17:21.863961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.863986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.864143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.864167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.864301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.864325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.316 [2024-07-15 13:17:21.864445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.316 [2024-07-15 13:17:21.864469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.316 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.864599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.864624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.864737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.864762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.864918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.864944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 
00:25:00.317 [2024-07-15 13:17:21.865546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.865861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.865996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.866187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.866340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.866524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.866673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.866844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.866868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 
00:25:00.317 [2024-07-15 13:17:21.867190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.867840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.867980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.868127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.868293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.868441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.868591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 
00:25:00.317 [2024-07-15 13:17:21.868740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.868928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.868955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.869943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.869968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.870092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.870117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.870299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.870325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 
00:25:00.317 [2024-07-15 13:17:21.870480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.870508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.317 qpair failed and we were unable to recover it. 00:25:00.317 [2024-07-15 13:17:21.870628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.317 [2024-07-15 13:17:21.870653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.870832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.870857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.871971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.871996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 
00:25:00.318 [2024-07-15 13:17:21.872117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.872143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.872292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.872317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.872458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.872482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.872615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.872640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.872806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.872831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 
00:25:00.318 [2024-07-15 13:17:21.873742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.873895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.873921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.874852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.874885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.875007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.875204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 
00:25:00.318 [2024-07-15 13:17:21.875384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.875557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.875734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.875912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.875937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.876860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.876901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 
00:25:00.318 [2024-07-15 13:17:21.877022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.877202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.877375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.877536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.877719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.877873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.318 [2024-07-15 13:17:21.877906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.318 qpair failed and we were unable to recover it. 00:25:00.318 [2024-07-15 13:17:21.878034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.878185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.878370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.878522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 
00:25:00.319 [2024-07-15 13:17:21.878700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.878873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.878903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.879821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.879847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.880001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.880152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 
00:25:00.319 [2024-07-15 13:17:21.880311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.880465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.880640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.880820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.880845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.881827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.881852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 
00:25:00.319 [2024-07-15 13:17:21.882037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.882184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.882358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.882504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.882705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.882888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.882913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.883033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.883178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.883355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.883531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 
00:25:00.319 [2024-07-15 13:17:21.883677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.883842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.883884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.884948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.884973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.319 [2024-07-15 13:17:21.885114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.885139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 
00:25:00.319 [2024-07-15 13:17:21.885291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.319 [2024-07-15 13:17:21.885315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.319 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.885460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.885484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.885619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.885644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.885798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.885822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.885974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.885999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.886137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.886175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.886318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.886346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.886490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.886516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.886664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.886690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.886847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.886872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 
00:25:00.320 [2024-07-15 13:17:21.887013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.887849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.887979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.888140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.888333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.888518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 
00:25:00.320 [2024-07-15 13:17:21.888706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.888870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.888901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.889849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.889881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.890033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.890210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 
00:25:00.320 [2024-07-15 13:17:21.890362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.890533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.890720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.890905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.890932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.891875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.891908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 
00:25:00.320 [2024-07-15 13:17:21.892040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.892066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.892219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.892244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.892401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.892427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.892543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.892569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.320 [2024-07-15 13:17:21.892713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.320 [2024-07-15 13:17:21.892738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.320 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.892873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.892911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.893071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.893228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.893392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.893566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 
00:25:00.321 [2024-07-15 13:17:21.893720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.893905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.893931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.894891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.894916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.895070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.895095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 00:25:00.321 [2024-07-15 13:17:21.895250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.321 [2024-07-15 13:17:21.895276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.321 qpair failed and we were unable to recover it. 
00:25:00.321 [2024-07-15 13:17:21.895400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.895424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.895540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.895565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.895718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.895742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.895911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.895936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.896063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.896089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.896341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.896368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.896496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.896521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.896672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.896699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.896844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.896869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.897839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.897865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.898000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.898024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.898172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.898198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.898321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.321 [2024-07-15 13:17:21.898345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.321 qpair failed and we were unable to recover it.
00:25:00.321 [2024-07-15 13:17:21.898469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.898494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.898640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.898664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.898786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.898811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.898943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.898969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.899887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.899912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.900897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.900922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.901910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.901940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.902861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.902893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.903887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.903913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.904044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.904069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.322 [2024-07-15 13:17:21.904189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.322 [2024-07-15 13:17:21.904214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.322 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.904339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.904363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.904484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.904509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.904659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.904683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.904810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.904835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.904965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.904991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.905914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.905938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.906892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.906918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.907917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.907942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.908869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.908899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.909883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.909908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.323 [2024-07-15 13:17:21.910809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.323 [2024-07-15 13:17:21.910834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.323 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.911800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.911824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.912973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.912999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.913149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.913320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.913467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.913614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.913820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.913989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.914962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.914989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.915148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.915182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.915313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.915339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.915492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.915518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.915699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.915725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.915895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.915922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.916830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.916856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.917014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.917041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.917167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.917192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.917314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.917338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.917482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.917508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.324 [2024-07-15 13:17:21.917665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.324 [2024-07-15 13:17:21.917691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.324 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.917806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.917831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.917958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.917984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.918956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.918982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.919923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.919948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.920917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.920957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.921942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.921969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.922138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.922286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.922472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.922657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.922830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.922978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.325 [2024-07-15 13:17:21.923006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.325 qpair failed and we were unable to recover it.
00:25:00.325 [2024-07-15 13:17:21.923164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.923201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.923359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.923385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.923531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.923556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.923688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.923715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.923838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.923864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.924827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.924856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.925911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.925938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.926117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.926144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.926308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.926334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.926486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.926522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.926678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.926704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.926855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.926899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.927057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.927273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.927482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.927632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.927785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.927979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.928847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.928994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.929020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.326 qpair failed and we were unable to recover it.
00:25:00.326 [2024-07-15 13:17:21.929178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.326 [2024-07-15 13:17:21.929203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.929353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.929390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.929510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.929536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.929679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.929705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.929871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.929903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.930855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.930893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.931870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.931902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.932886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.932913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.933875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.933906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.327 [2024-07-15 13:17:21.934035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.327 [2024-07-15 13:17:21.934060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.327 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.934205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.934348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.934533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.934701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.934853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.934988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.935171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.935365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.935538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.935686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.935860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.935900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.936843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.936867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.937961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.937986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.328 [2024-07-15 13:17:21.938914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.328 [2024-07-15 13:17:21.938939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.328 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.939957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.939981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.940856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.940902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.941889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.941915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.942898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.942923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.943056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.943081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.943199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.943224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.943358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.943384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.943558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.943584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.329 [2024-07-15 13:17:21.943728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.329 [2024-07-15 13:17:21.943753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.329 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.943881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.943907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.944936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.944961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.945886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.945911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.946904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.946930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.947931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.947957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.948960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.948987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.949121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.949147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.949300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.949324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.330 qpair failed and we were unable to recover it.
00:25:00.330 [2024-07-15 13:17:21.949483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.330 [2024-07-15 13:17:21.949508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.331 qpair failed and we were unable to recover it.
00:25:00.331 [2024-07-15 13:17:21.949641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.331 [2024-07-15 13:17:21.949667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.331 qpair failed and we were unable to recover it.
00:25:00.331 [2024-07-15 13:17:21.949798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.331 [2024-07-15 13:17:21.949830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.331 qpair failed and we were unable to recover it.
00:25:00.331 [2024-07-15 13:17:21.949955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.331 [2024-07-15 13:17:21.949982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.331 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.950899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.950925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-15 13:17:21.951057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-15 13:17:21.951083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.951232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.951257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.951381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.951406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.951537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.951562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.951695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.951721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.951856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.951886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.952040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-15 13:17:21.952066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-15 13:17:21.952214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.952239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.952400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.952425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.952553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.952578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.952706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.952731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.952855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.952898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.953062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.953088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.953251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.953276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.953421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.953447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.953621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.953646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.953810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.953836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 
00:25:00.606 [2024-07-15 13:17:21.954012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.954847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.954981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.955129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.955311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.955478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 
00:25:00.606 [2024-07-15 13:17:21.955653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.955842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.955866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.956036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.956061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.956217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.956243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.956400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.956424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-15 13:17:21.956548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-15 13:17:21.956572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.956716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.956742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.956870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.956906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.957064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.957210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-15 13:17:21.957375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.957531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.957712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.957871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.957902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.958900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.958926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-15 13:17:21.959051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.959226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.959382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.959539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.959705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.959892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.959919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.960045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.960228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.960370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.960509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-15 13:17:21.960672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.960834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.960860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.961042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.961191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.961368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.961558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-15 13:17:21.961747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-15 13:17:21.961870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.961907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-15 13:17:21.962331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.962840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.962976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.963166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.963321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.963492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.963655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.963823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.963849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-15 13:17:21.963978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.964827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.964975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.965124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.965326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.965481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-15 13:17:21.965675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.965835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.965860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.965985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.966011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.966194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.966224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.966345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.966372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.966513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.966538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-15 13:17:21.966664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-15 13:17:21.966689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.966835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.966861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.967040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.967213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 
00:25:00.609 [2024-07-15 13:17:21.967369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.967544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.967698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.967886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.967912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.968890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.968916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 
00:25:00.609 [2024-07-15 13:17:21.969033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.969857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.969992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.970148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.970306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.970481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 
00:25:00.609 [2024-07-15 13:17:21.970694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.970854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.970884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.971018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.971198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.971223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.971353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.971379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.971535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-15 13:17:21.971561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-15 13:17:21.971737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.971763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.971917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.971943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.972068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.972220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 
00:25:00.610 [2024-07-15 13:17:21.972394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.972543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.972727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.972889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.972915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.973863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.973906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 
00:25:00.610 [2024-07-15 13:17:21.974022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.974190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.974374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.974526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.974681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.974828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.974854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.975007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.975186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.975367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.975528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 
00:25:00.610 [2024-07-15 13:17:21.975683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.975834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.975859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.976026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.976051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.976177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.976202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.976333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.976358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.976500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.976525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.610 qpair failed and we were unable to recover it. 00:25:00.610 [2024-07-15 13:17:21.976693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.610 [2024-07-15 13:17:21.976719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.611 qpair failed and we were unable to recover it. 00:25:00.611 [2024-07-15 13:17:21.976859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.611 [2024-07-15 13:17:21.976891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.611 qpair failed and we were unable to recover it. 00:25:00.611 [2024-07-15 13:17:21.977015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.611 [2024-07-15 13:17:21.977041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.611 qpair failed and we were unable to recover it. 00:25:00.611 [2024-07-15 13:17:21.977160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.611 [2024-07-15 13:17:21.977198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.611 qpair failed and we were unable to recover it. 
00:25:00.611 [2024-07-15 13:17:21.977329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.611 [2024-07-15 13:17:21.977354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.611 qpair failed and we were unable to recover it.
...
00:25:00.612 [2024-07-15 13:17:21.985387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.612 [2024-07-15 13:17:21.985412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.612 qpair failed and we were unable to recover it.
00:25:00.612 [2024-07-15 13:17:21.985559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.613 [2024-07-15 13:17:21.985594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.613 qpair failed and we were unable to recover it.
...
00:25:00.614 [2024-07-15 13:17:21.992342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.614 [2024-07-15 13:17:21.992369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.614 qpair failed and we were unable to recover it.
00:25:00.614 [2024-07-15 13:17:21.992511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.614 [2024-07-15 13:17:21.992550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.614 qpair failed and we were unable to recover it.
...
00:25:00.615 [2024-07-15 13:17:21.999034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-15 13:17:21.999060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-15 13:17:21.999201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-15 13:17:21.999240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
...
00:25:00.617 [2024-07-15 13:17:22.007635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-15 13:17:22.007660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-15 13:17:22.007836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-15 13:17:22.007888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
...
00:25:00.617 [2024-07-15 13:17:22.008812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-15 13:17:22.008837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-15 13:17:22.008997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-15 13:17:22.009024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
...
00:25:00.618 [2024-07-15 13:17:22.012526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-15 13:17:22.012553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-15 13:17:22.012719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-15 13:17:22.012758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-15 13:17:22.012928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.012963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.013966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.013992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.014143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.014284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.014457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 
00:25:00.618 [2024-07-15 13:17:22.014620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.014763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.014917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.014944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.015940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.015967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.016120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.016146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 
00:25:00.618 [2024-07-15 13:17:22.016273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.016299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.016430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.016457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.016583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.016609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-15 13:17:22.016763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-15 13:17:22.016789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.016924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.016951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.017129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.017270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.017419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.017579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.017762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-15 13:17:22.017940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.017976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.018937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.018963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.019089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.019244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.019415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-15 13:17:22.019574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.019742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.019918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.019946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.020128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.020284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.020497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.020652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.020841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.020979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.021005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.021182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.021207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-15 13:17:22.021363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.021389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.021510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-15 13:17:22.021535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-15 13:17:22.021698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.021724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.021887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.021913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.022935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.022962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-15 13:17:22.023119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.023285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.023432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.023580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.023735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.023904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.023930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.024059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.024203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.024383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.024532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-15 13:17:22.024718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.024869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.024902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.025840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.025983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.026011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.026134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.026160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-15 13:17:22.026289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.026316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-15 13:17:22.026471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-15 13:17:22.026496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.026618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.026643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.026774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.026802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.026962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.026988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.027109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.027309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.027459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.027603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.027754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 
00:25:00.621 [2024-07-15 13:17:22.027902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.027927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.028844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.028975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.029144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.029312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 
00:25:00.621 [2024-07-15 13:17:22.029503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.029660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.029837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.029862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.030827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.030854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.031035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.031074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 
00:25:00.621 [2024-07-15 13:17:22.031319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.031345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.621 [2024-07-15 13:17:22.031501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.621 [2024-07-15 13:17:22.031527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.621 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.031682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.031708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.031844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.031871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e8c000b90 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.032940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.032979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 
00:25:00.622 [2024-07-15 13:17:22.033108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.033285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.033463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.033642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.033790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.033947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.033973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.034090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.034116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.034270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.034296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.034419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.034444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 00:25:00.622 [2024-07-15 13:17:22.034581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.034607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it. 
00:25:00.622 [2024-07-15 13:17:22.034740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.622 [2024-07-15 13:17:22.034765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.622 qpair failed and we were unable to recover it.
00:25:00.622-00:25:00.627 [2024-07-15 13:17:22.034896 through 13:17:22.059631] the same posix.c:1038 / nvme_tcp.c:2383 error pair repeats for every reconnect attempt in this interval: connect() fails with errno = 111 (ECONNREFUSED), each affected qpair reports "failed and we were unable to recover it", and the failing tqpair cycles among 0x13f2200, 0x7f5e8c000b90, 0x7f5e84000b90, and 0x7f5e94000b90, all targeting addr=10.0.0.2, port=4420.
00:25:00.627 [2024-07-15 13:17:22.059753 through 13:17:22.060617] further connect() failures, errno = 111, tqpair=0x13f2200 with addr=10.0.0.2, port=4420; each qpair failed and could not be recovered. Interleaved with these errors, the test script continues:
00:25:00.627 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:00.627 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:00.627 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:00.627 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:00.627 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.627-00:25:00.628 [2024-07-15 13:17:22.060746 through 13:17:22.062377] the connect() / errno = 111 error pair on tqpair=0x13f2200 continues throughout this span.
00:25:00.628-00:25:00.629 [2024-07-15 13:17:22.062510 through 13:17:22.069274] the reconnect loop keeps producing the same posix.c:1038 / nvme_tcp.c:2383 error pair: connect() fails with errno = 111 against addr=10.0.0.2, port=4420, and each qpair (tqpair=0x13f2200 or 0x7f5e84000b90) fails and cannot be recovered.
00:25:00.629 [2024-07-15 13:17:22.069401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.069427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.069585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.069611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.069736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.069762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.069896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.069921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.070843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.070868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-15 13:17:22.071001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.071027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.071163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-15 13:17:22.071192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-15 13:17:22.071339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.071365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.071480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.071505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.071632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.071658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.071776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.071801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.071957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.071983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.072106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.072246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.072389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-15 13:17:22.072537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.072733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.072886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.072913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.073882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.073908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.074040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-15 13:17:22.074229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.074371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.074533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.074705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.074853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.074895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.075059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.075256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.075413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.075594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.075789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-15 13:17:22.075945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.075970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.076134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.076160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.076295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.076320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-15 13:17:22.076450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-15 13:17:22.076475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.076596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.076621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.076768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.076794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.076953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.076978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.077098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.077123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.077246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.077271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-15 13:17:22.077394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-15 13:17:22.077419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 
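Note: errno = 111 in these records is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 while the host keeps retrying, which is the expected symptom mid-way through a target-disconnect test. A minimal standalone probe of the listener, assuming plain bash with /dev/tcp support (this check is our sketch, not part of the test suite):

    # Try a 1-second TCP connect to the NVMe/TCP listener; a refusal here
    # corresponds to the "connect() failed, errno = 111" records above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener up on 10.0.0.2:4420"
    else
        echo "connect to 10.0.0.2:4420 refused or timed out"
    fi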
00:25:00.631 [2024-07-15 13:17:22.077581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.077620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-15 13:17:22.077774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.077801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-15 13:17:22.077940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.077967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-15 13:17:22.078087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:00.631 [2024-07-15 13:17:22.078114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:00.631 [2024-07-15 13:17:22.078248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.078278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.631 [2024-07-15 13:17:22.078440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.078468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420
00:25:00.631 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-15 13:17:22.078597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.078624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-15 13:17:22.078749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.078774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
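Note: interleaved with the connect() retries above, the xtrace lines show test case nvmf_target_disconnect_tc2 arming its cleanup trap and creating the backing bdev over RPC. Stated standalone (the trap is copied from the trace; the rpc.py path is our assumption about what the suite's rpc_cmd wrapper ultimately invokes):

    # On SIGINT/SIGTERM/EXIT, dump the app's shared-memory diagnostics if
    # possible (`|| :` swallows failure), then run the suite's teardown helper.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # Create a 64 MiB RAM-backed bdev with a 512-byte block size, named
    # Malloc0, for the NVMe-oF target to export.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0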
00:25:00.631 [2024-07-15 13:17:22.078894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-15 13:17:22.078920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
[... near-identical three-line failure records elided: the same connect()/sock-connection-error/"qpair failed" sequence continues uninterrupted from 13:17:22.079 through 13:17:22.096 (wallclock 00:25:00.631 to 00:25:00.635), in runs alternating between tqpair=0x13f2200 and tqpair=0x7f5e84000b90, every attempt to 10.0.0.2:4420 failing with errno = 111; the run carries on past this excerpt ...]
00:25:00.635 [2024-07-15 13:17:22.097052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.097843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.097986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.098164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.098351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.098490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-15 13:17:22.098641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.098793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.098819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.098982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.099947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.099976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.100136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.100161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-15 13:17:22.100320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.100346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.100467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.100493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.100647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.100673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.100798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.100824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.100987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.101013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.101136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.101161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-15 13:17:22.101309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-15 13:17:22.101335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.101491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.101516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.101672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.101696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.101840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.101885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 
00:25:00.636 [2024-07-15 13:17:22.102023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.102195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.102376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.102530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.102678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.102822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.102848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e84000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.103010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.103051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.103192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.103218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.103350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.103376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-15 13:17:22.103510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-15 13:17:22.103537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 
00:25:00.636 [2024-07-15 13:17:22.103673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.103699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 Malloc0
00:25:00.636 [2024-07-15 13:17:22.103863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.103912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 [2024-07-15 13:17:22.104057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.104083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.636 [2024-07-15 13:17:22.104234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.104260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 [2024-07-15 13:17:22.104397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:00.636 [2024-07-15 13:17:22.104422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.636 [2024-07-15 13:17:22.104560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.104586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.636 [2024-07-15 13:17:22.104735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.104760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 [2024-07-15 13:17:22.104912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.636 [2024-07-15 13:17:22.104938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.636 qpair failed and we were unable to recover it.
00:25:00.636 [... repeated connect()/qpair-failed retry triplets (errno = 111, tqpair=0x13f2200, addr=10.0.0.2, port=4420) elided ...]
00:25:00.637 [2024-07-15 13:17:22.106788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.106813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.106976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.637 [2024-07-15 13:17:22.107622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.107926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.107951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.108077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.108102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
00:25:00.637 [2024-07-15 13:17:22.108268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.637 [2024-07-15 13:17:22.108293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.637 qpair failed and we were unable to recover it.
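[Note: the xtrace lines above show host/target_disconnect.sh configuring the NVMe-oF target over JSON-RPC while the initiator keeps probing the port. For reference, a minimal sketch of that bring-up using SPDK's scripts/rpc.py, which the rpc_cmd wrapper resolves to. Only the transport command is taken from this trace; the Malloc0 creation and listener lines are assumed defaults, not the harness's exact invocation:

  # traced above: create the TCP transport (-o disables the C2H success optimization)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  # assumed: back the namespace with a 64 MiB, 512 B-block malloc bdev named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # create the subsystem (-a: allow any host, -s: serial number), then add the namespace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # assumed: only after this listener exists will connect() to 10.0.0.2:4420 stop being refused
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420]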
00:25:00.637 [... repeated connect()/qpair-failed retry triplets (errno = 111, tqpair handles 0x13f2200 and 0x7f5e84000b90, addr=10.0.0.2, port=4420) elided ...]
00:25:00.638 [2024-07-15 13:17:22.115153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 [2024-07-15 13:17:22.115337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 [2024-07-15 13:17:22.115498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 [2024-07-15 13:17:22.115651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.638 [2024-07-15 13:17:22.115820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.638 [2024-07-15 13:17:22.115972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.115997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.638 [2024-07-15 13:17:22.116126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.116151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
00:25:00.638 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.638 [2024-07-15 13:17:22.116278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.638 [2024-07-15 13:17:22.116304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.638 qpair failed and we were unable to recover it.
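[Note: errno = 111 is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420 at the moment of each probe, so every qpair connect attempt fails at the socket layer. A host-side attempt at the same connection would look roughly like the following nvme-cli sketch; the commands are an assumption for illustration and are not taken from this run:

  # assumed nvme-cli invocation; succeeds only once the target exposes a listener on 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # later, tear the session down again
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1]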
00:25:00.638 [2024-07-15 13:17:22.116422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.638 [2024-07-15 13:17:22.116447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-15 13:17:22.116591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.638 [2024-07-15 13:17:22.116616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-15 13:17:22.116745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.638 [2024-07-15 13:17:22.116770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-15 13:17:22.116899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.638 [2024-07-15 13:17:22.116925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.638 qpair failed and we were unable to recover it. 00:25:00.638 [2024-07-15 13:17:22.117049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-15 13:17:22.117208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-15 13:17:22.117359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-15 13:17:22.117511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-15 13:17:22.117678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 00:25:00.639 [2024-07-15 13:17:22.117820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.639 [2024-07-15 13:17:22.117845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 00:25:00.639 qpair failed and we were unable to recover it. 
00:25:00.639 [2024-07-15 13:17:22.117984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.639 [2024-07-15 13:17:22.118010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.639 qpair failed and we were unable to recover it.
00:25:00.640 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with consecutive timestamps from 13:17:22.118139 through 13:17:22.122821 ...]
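Note: errno 111 is ECONNREFUSED; nothing is listening on 10.0.0.2:4420 yet, so every qpair connect attempt is refused and the initiator keeps retrying. The refusal can be reproduced by hand with a bash /dev/tcp probe (illustrative only, not part of the test):

    # probe the target address the same way the initiator does; with no
    # listener up this takes the "refused" branch, matching errno = 111 above
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "connection refused (errno 111)"
    fi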
00:25:00.640 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 / qpair failed sequence repeats from 13:17:22.122948 through 13:17:22.123779 ...]
00:25:00.640 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.640 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:00.640 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.640 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.640 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 / qpair failed sequence repeats from 13:17:22.123974 through 13:17:22.125771 ...]
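The rpc_cmd trace above (host/target_disconnect.sh line 24) attaches the Malloc0 bdev as a namespace of the subsystem while the initiator is still retrying. rpc_cmd is the autotest wrapper around SPDK's RPC client; driven directly, the step would look roughly like this (a sketch, assuming the default RPC socket):

    # attach bdev Malloc0 as a namespace of the subsystem under test
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0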
00:25:00.641 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. repeats from 13:17:22.125906 through 13:17:22.130778 ...]
00:25:00.641 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 / qpair failed sequence repeats from 13:17:22.130935 through 13:17:22.131924 ...]
00:25:00.642 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.642 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:00.642 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.642 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.642 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 / qpair failed sequence repeats from 13:17:22.132066 through 13:17:22.133487 ...]
00:25:00.642 [2024-07-15 13:17:22.133653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.642 [2024-07-15 13:17:22.133693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e94000b90 with addr=10.0.0.2, port=4420
00:25:00.642 qpair failed and we were unable to recover it.
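This is the step the retries have been waiting on: host/target_disconnect.sh line 25 adds the TCP listener on 10.0.0.2:4420. The equivalent direct invocation (a sketch, same assumption as above):

    # open the TCP listener the initiator has been failing to reach
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The change from tqpair=0x13f2200 to tqpair=0x7f5e94000b90 just above appears to be a second qpair object retrying against the same address; both keep failing until the listener is actually accepting.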
00:25:00.642 [... connect() failed (errno = 111) / sock connection error of tqpair=0x7f5e94000b90 / qpair failed sequence repeats from 13:17:22.133837 through 13:17:22.134628 ...]
00:25:00.642 [... connect() failed (errno = 111) / sock connection error of tqpair=0x13f2200 / qpair failed sequence repeats from 13:17:22.134792 through 13:17:22.135439 ...]
00:25:00.642 [2024-07-15 13:17:22.135558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.642 [2024-07-15 13:17:22.135583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f2200 with addr=10.0.0.2, port=4420
00:25:00.642 qpair failed and we were unable to recover it.
00:25:00.642 [2024-07-15 13:17:22.135759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.642 [2024-07-15 13:17:22.138305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.643 [2024-07-15 13:17:22.138455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.643 [2024-07-15 13:17:22.138483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.643 [2024-07-15 13:17:22.138498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.643 [2024-07-15 13:17:22.138511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:00.643 [2024-07-15 13:17:22.138545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.643 qpair failed and we were unable to recover it.
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.643 13:17:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3937273
00:25:00.643 [2024-07-15 13:17:22.148158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.643 [2024-07-15 13:17:22.148283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.643 [2024-07-15 13:17:22.148315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.643 [2024-07-15 13:17:22.148330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.643 [2024-07-15 13:17:22.148343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:00.643 [2024-07-15 13:17:22.148371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:00.643 qpair failed and we were unable to recover it.
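Note on the status pair above: sct 1 is the command-specific status type, and for a Fabrics CONNECT command sc 130 (0x82) is Connect Invalid Parameters. That lines up with the target-side "Unknown controller ID 0x1": the listener is now up, but the initiator's I/O qpair (qpair id 3) is trying to re-attach to controller ID 1, which the target no longer recognizes after the disconnect, so every CONNECT is rejected. For comparison only, a fresh fabrics connection to the same listener could be made with nvme-cli (the test drives SPDK's userspace initiator, not the kernel; this invocation is illustrative):

    # kernel-initiator equivalent of connecting to the subsystem under test
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1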
00:25:00.643 [... the same retry block (ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 / Connect command failed, rc -5 / Connect command completed with error: sct 1, sc 130 / Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x13f2200 / CQ transport error -6 (No such device or address) on qpair id 3 / qpair failed and we were unable to recover it.) repeats roughly every 10 ms from 13:17:22.158135 through 13:17:22.298693; the console elapsed-time prefix advances from 00:25:00.643 to 00:25:00.905 during this run ...]
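For reference, the numeric codes in these retry blocks decode as below; the errno(1) utility from moreutils, where installed, prints the same mapping (illustrative):

    errno 111   # ECONNREFUSED 111 Connection refused   (the earlier socket retries)
    errno 6     # ENXIO 6 No such device or address     (the "CQ transport error -6")
    errno 5     # EIO 5 Input/output error              (likely source of the "rc -5")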
00:25:00.905 [... identical retry blocks continue every ~10 ms from 13:17:22.308497 through 13:17:22.449089, each ending "qpair failed and we were unable to recover it." ...]
00:25:00.906 [2024-07-15 13:17:22.458888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.906 [2024-07-15 13:17:22.459049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.906 [2024-07-15 13:17:22.459075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.906 [2024-07-15 13:17:22.459097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.906 [2024-07-15 13:17:22.459110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.906 [2024-07-15 13:17:22.459139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.906 qpair failed and we were unable to recover it. 00:25:00.906 [2024-07-15 13:17:22.468909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.906 [2024-07-15 13:17:22.469046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.906 [2024-07-15 13:17:22.469071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.906 [2024-07-15 13:17:22.469085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.906 [2024-07-15 13:17:22.469097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.906 [2024-07-15 13:17:22.469125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.906 qpair failed and we were unable to recover it. 00:25:00.906 [2024-07-15 13:17:22.478934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.906 [2024-07-15 13:17:22.479061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.906 [2024-07-15 13:17:22.479087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.479101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.479113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.479140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 
00:25:00.907 [2024-07-15 13:17:22.488984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.489126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.489152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.489166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.489179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.489206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.499087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.499217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.499243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.499257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.499270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.499297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.509036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.509162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.509188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.509202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.509215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.509243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 
00:25:00.907 [2024-07-15 13:17:22.519046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.519180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.519206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.519220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.519232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.519260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.529089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.529220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.529244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.529258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.529271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.529298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.539142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.539273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.539298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.539312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.539324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.539352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 
00:25:00.907 [2024-07-15 13:17:22.549137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.549268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.549292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.549313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.549325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.549354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.559157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.559286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.559311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.559325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.559338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.559365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.569238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.569373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.569397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.569412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.569425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.569452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 
00:25:00.907 [2024-07-15 13:17:22.579264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.907 [2024-07-15 13:17:22.579441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.907 [2024-07-15 13:17:22.579466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.907 [2024-07-15 13:17:22.579480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.907 [2024-07-15 13:17:22.579492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.907 [2024-07-15 13:17:22.579520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.907 qpair failed and we were unable to recover it. 00:25:00.907 [2024-07-15 13:17:22.589242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.908 [2024-07-15 13:17:22.589366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.908 [2024-07-15 13:17:22.589391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.908 [2024-07-15 13:17:22.589405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.908 [2024-07-15 13:17:22.589417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.908 [2024-07-15 13:17:22.589445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.908 qpair failed and we were unable to recover it. 00:25:00.908 [2024-07-15 13:17:22.599270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.908 [2024-07-15 13:17:22.599397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.908 [2024-07-15 13:17:22.599422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.908 [2024-07-15 13:17:22.599436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.908 [2024-07-15 13:17:22.599450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:00.908 [2024-07-15 13:17:22.599477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.908 qpair failed and we were unable to recover it. 
00:25:01.170 [2024-07-15 13:17:22.609363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.170 [2024-07-15 13:17:22.609540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.170 [2024-07-15 13:17:22.609565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.170 [2024-07-15 13:17:22.609579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.170 [2024-07-15 13:17:22.609592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.170 [2024-07-15 13:17:22.609619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.170 qpair failed and we were unable to recover it. 00:25:01.170 [2024-07-15 13:17:22.619333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.170 [2024-07-15 13:17:22.619459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.170 [2024-07-15 13:17:22.619484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.170 [2024-07-15 13:17:22.619497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.170 [2024-07-15 13:17:22.619511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.170 [2024-07-15 13:17:22.619538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.170 qpair failed and we were unable to recover it. 00:25:01.170 [2024-07-15 13:17:22.629369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.170 [2024-07-15 13:17:22.629493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.170 [2024-07-15 13:17:22.629518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.170 [2024-07-15 13:17:22.629532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.170 [2024-07-15 13:17:22.629545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.170 [2024-07-15 13:17:22.629572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.170 qpair failed and we were unable to recover it. 
00:25:01.170 [2024-07-15 13:17:22.639427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.170 [2024-07-15 13:17:22.639554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.170 [2024-07-15 13:17:22.639580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.170 [2024-07-15 13:17:22.639605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.170 [2024-07-15 13:17:22.639619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.170 [2024-07-15 13:17:22.639649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.170 qpair failed and we were unable to recover it. 00:25:01.170 [2024-07-15 13:17:22.649439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.170 [2024-07-15 13:17:22.649574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.170 [2024-07-15 13:17:22.649599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.170 [2024-07-15 13:17:22.649613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.170 [2024-07-15 13:17:22.649626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.170 [2024-07-15 13:17:22.649654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.659557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.659700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.659726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.659741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.659753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.659780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-07-15 13:17:22.669485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.669606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.669631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.669646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.669659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.669686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.679521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.679650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.679677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.679696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.679710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.679739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.689548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.689679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.689704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.689718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.689731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.689759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-07-15 13:17:22.699589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.699713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.699738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.699752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.699766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.699793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.709602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.709773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.709798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.709812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.709825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.709852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.719649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.719811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.719836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.719850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.719863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.719896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-07-15 13:17:22.729666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.729801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.729831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.729846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.729859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.729893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.739678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.739804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.739829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.739843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.739856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.739889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.749741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.749894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.749921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.749936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.749949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.749979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-07-15 13:17:22.759734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.759858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.759890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.759906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.759919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.759947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.769793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.769946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.769971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.769985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.769999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.770033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.779800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.779931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.779957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.779972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.779984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.780012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-07-15 13:17:22.789830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.789981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-07-15 13:17:22.790010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-07-15 13:17:22.790025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-07-15 13:17:22.790038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.171 [2024-07-15 13:17:22.790067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.171 qpair failed and we were unable to recover it. 00:25:01.171 [2024-07-15 13:17:22.799857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-07-15 13:17:22.799997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.800023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.800038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.800051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.800079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 00:25:01.172 [2024-07-15 13:17:22.809906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.810079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.810104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.810118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.810131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.810159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-07-15 13:17:22.819926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.820069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.820100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.820115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.820128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.820155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 00:25:01.172 [2024-07-15 13:17:22.829973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.830106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.830133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.830152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.830165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.830194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 00:25:01.172 [2024-07-15 13:17:22.839967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.840088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.840114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.840129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.840142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.840169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-07-15 13:17:22.850051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.850208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.850232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.850246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.850260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.850288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 00:25:01.172 [2024-07-15 13:17:22.860043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-07-15 13:17:22.860222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-07-15 13:17:22.860247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-07-15 13:17:22.860261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-07-15 13:17:22.860274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.172 [2024-07-15 13:17:22.860308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.172 qpair failed and we were unable to recover it. 00:25:01.432 [2024-07-15 13:17:22.870068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.432 [2024-07-15 13:17:22.870241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.432 [2024-07-15 13:17:22.870267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.432 [2024-07-15 13:17:22.870281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.432 [2024-07-15 13:17:22.870294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.432 [2024-07-15 13:17:22.870323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.432 qpair failed and we were unable to recover it. 
00:25:01.432 [2024-07-15 13:17:22.880072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.432 [2024-07-15 13:17:22.880230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.432 [2024-07-15 13:17:22.880256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.432 [2024-07-15 13:17:22.880270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.432 [2024-07-15 13:17:22.880283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.432 [2024-07-15 13:17:22.880312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.432 qpair failed and we were unable to recover it. 00:25:01.432 [2024-07-15 13:17:22.890164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.432 [2024-07-15 13:17:22.890299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.432 [2024-07-15 13:17:22.890324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.432 [2024-07-15 13:17:22.890339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.432 [2024-07-15 13:17:22.890352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.432 [2024-07-15 13:17:22.890379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.433 qpair failed and we were unable to recover it. 00:25:01.433 [2024-07-15 13:17:22.900142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.433 [2024-07-15 13:17:22.900270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.433 [2024-07-15 13:17:22.900295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.433 [2024-07-15 13:17:22.900310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.433 [2024-07-15 13:17:22.900322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.433 [2024-07-15 13:17:22.900350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.433 qpair failed and we were unable to recover it. 
00:25:01.433 [2024-07-15 13:17:22.910151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.433 [2024-07-15 13:17:22.910275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.433 [2024-07-15 13:17:22.910306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.433 [2024-07-15 13:17:22.910321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.433 [2024-07-15 13:17:22.910333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:01.433 [2024-07-15 13:17:22.910361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.433 qpair failed and we were unable to recover it.
[... the same seven-record error sequence repeats 68 more times, roughly every 10 ms of target time, from 13:17:22.920 through 13:17:23.592; every repetition reports the identical failure: Unknown controller ID 0x1, Connect rc -5, sct 1 / sc 130, tqpair=0x13f2200, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it." ...]
00:25:01.964 [2024-07-15 13:17:23.602110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.964 [2024-07-15 13:17:23.602242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.964 [2024-07-15 13:17:23.602268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.964 [2024-07-15 13:17:23.602282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.964 [2024-07-15 13:17:23.602295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.964 [2024-07-15 13:17:23.602322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.964 qpair failed and we were unable to recover it. 00:25:01.964 [2024-07-15 13:17:23.612150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.964 [2024-07-15 13:17:23.612283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.964 [2024-07-15 13:17:23.612308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.964 [2024-07-15 13:17:23.612322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.964 [2024-07-15 13:17:23.612335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.964 [2024-07-15 13:17:23.612362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.964 qpair failed and we were unable to recover it. 00:25:01.964 [2024-07-15 13:17:23.622166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.964 [2024-07-15 13:17:23.622294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.964 [2024-07-15 13:17:23.622319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.964 [2024-07-15 13:17:23.622333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.964 [2024-07-15 13:17:23.622347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.964 [2024-07-15 13:17:23.622380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.965 qpair failed and we were unable to recover it. 
00:25:01.965 [2024-07-15 13:17:23.632233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.965 [2024-07-15 13:17:23.632372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.965 [2024-07-15 13:17:23.632397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.965 [2024-07-15 13:17:23.632411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.965 [2024-07-15 13:17:23.632423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.965 [2024-07-15 13:17:23.632451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 13:17:23.642239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.965 [2024-07-15 13:17:23.642368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.965 [2024-07-15 13:17:23.642394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.965 [2024-07-15 13:17:23.642408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.965 [2024-07-15 13:17:23.642421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.965 [2024-07-15 13:17:23.642449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 13:17:23.652274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.965 [2024-07-15 13:17:23.652405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.965 [2024-07-15 13:17:23.652429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.965 [2024-07-15 13:17:23.652444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.965 [2024-07-15 13:17:23.652456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:01.965 [2024-07-15 13:17:23.652484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.965 qpair failed and we were unable to recover it. 
00:25:02.223 [2024-07-15 13:17:23.662348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.662490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.662515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.662530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.662543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.662570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 00:25:02.223 [2024-07-15 13:17:23.672306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.672435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.672465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.672480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.672493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.672523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 00:25:02.223 [2024-07-15 13:17:23.682333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.682464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.682490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.682505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.682518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.682545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 
00:25:02.223 [2024-07-15 13:17:23.692374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.692510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.692535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.692549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.692562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.692590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 00:25:02.223 [2024-07-15 13:17:23.702381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.702511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.702536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.702551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.702564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.702591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 00:25:02.223 [2024-07-15 13:17:23.712416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.712542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.712566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.712581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.712593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.712629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 
00:25:02.223 [2024-07-15 13:17:23.722517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.722684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.223 [2024-07-15 13:17:23.722710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.223 [2024-07-15 13:17:23.722724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.223 [2024-07-15 13:17:23.722737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.223 [2024-07-15 13:17:23.722765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.223 qpair failed and we were unable to recover it. 00:25:02.223 [2024-07-15 13:17:23.732530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.223 [2024-07-15 13:17:23.732657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.732682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.732696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.732709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.732737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.742508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.742640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.742665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.742679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.742692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.742720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-07-15 13:17:23.752546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.752672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.752698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.752712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.752724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.752751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.762551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.762670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.762701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.762716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.762728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.762756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.772641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.772801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.772826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.772841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.772853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.772890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-07-15 13:17:23.782629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.782760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.782785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.782800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.782813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.782840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.792677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.792818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.792843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.792857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.792870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.792906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.802719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.802852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.802884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.802900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.802919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.802949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-07-15 13:17:23.812704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.812842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.812867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.812889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.812904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.812932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.822730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.822867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.822900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.822915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.822928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.822956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.832767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.832949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.832974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.832988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.833001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.833029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-07-15 13:17:23.842797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.842942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.842967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.842981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.842994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.843022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.852847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.853010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.853035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.853049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.853062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.853090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 00:25:02.224 [2024-07-15 13:17:23.862845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.862981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.863005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.863020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.863033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.863060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.224 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-07-15 13:17:23.872891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.224 [2024-07-15 13:17:23.873032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.224 [2024-07-15 13:17:23.873057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.224 [2024-07-15 13:17:23.873071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.224 [2024-07-15 13:17:23.873084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.224 [2024-07-15 13:17:23.873112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.225 qpair failed and we were unable to recover it. 00:25:02.225 [2024-07-15 13:17:23.882906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.225 [2024-07-15 13:17:23.883034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.225 [2024-07-15 13:17:23.883060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.225 [2024-07-15 13:17:23.883075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.225 [2024-07-15 13:17:23.883088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.225 [2024-07-15 13:17:23.883115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.225 qpair failed and we were unable to recover it. 00:25:02.225 [2024-07-15 13:17:23.892979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.225 [2024-07-15 13:17:23.893109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.225 [2024-07-15 13:17:23.893134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.225 [2024-07-15 13:17:23.893148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.225 [2024-07-15 13:17:23.893167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.225 [2024-07-15 13:17:23.893194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.225 qpair failed and we were unable to recover it. 
00:25:02.225 [2024-07-15 13:17:23.902987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.225 [2024-07-15 13:17:23.903108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.225 [2024-07-15 13:17:23.903133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.225 [2024-07-15 13:17:23.903148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.225 [2024-07-15 13:17:23.903161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.225 [2024-07-15 13:17:23.903188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.225 qpair failed and we were unable to recover it. 00:25:02.225 [2024-07-15 13:17:23.912988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.225 [2024-07-15 13:17:23.913113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.225 [2024-07-15 13:17:23.913139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.225 [2024-07-15 13:17:23.913153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.225 [2024-07-15 13:17:23.913166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.225 [2024-07-15 13:17:23.913193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.225 qpair failed and we were unable to recover it. 00:25:02.484 [2024-07-15 13:17:23.923021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.923151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.923176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.923190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.923203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.923231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-07-15 13:17:23.933070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.933210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.933235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.933249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.933262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.933290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-07-15 13:17:23.943071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.943216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.943241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.943255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.943268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.943295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-07-15 13:17:23.953138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.953284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.953309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.953323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.953336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.953364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-07-15 13:17:23.963114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.963238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.963262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.963276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.963289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.963316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-07-15 13:17:23.973224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.973375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.973401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.484 [2024-07-15 13:17:23.973415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.484 [2024-07-15 13:17:23.973428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.484 [2024-07-15 13:17:23.973455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-07-15 13:17:23.983178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.484 [2024-07-15 13:17:23.983303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.484 [2024-07-15 13:17:23.983328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:23.983348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:23.983362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:23.983389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-07-15 13:17:23.993219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:23.993411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:23.993436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:23.993451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:23.993464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:23.993490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.003248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.003376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.003401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.003416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.003429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.003456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.013298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.013460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.013484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.013498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.013511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.013538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-07-15 13:17:24.023275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.023403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.023428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.023442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.023455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.023482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.033308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.033449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.033475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.033489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.033502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.033529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.043375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.043501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.043527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.043541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.043554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.043581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-07-15 13:17:24.053398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.053529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.053554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.053569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.053582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.053609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.063431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.063557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.063582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.063596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.063609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.063636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.073454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.073626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.073652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.073672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.073686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.073713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-07-15 13:17:24.083460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.083588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.083613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.083628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.083641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.083668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.093484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.093624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.093649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.093663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.485 [2024-07-15 13:17:24.093676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.485 [2024-07-15 13:17:24.093704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-07-15 13:17:24.103524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.485 [2024-07-15 13:17:24.103652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.485 [2024-07-15 13:17:24.103678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.485 [2024-07-15 13:17:24.103692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.486 [2024-07-15 13:17:24.103704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:02.486 [2024-07-15 13:17:24.103732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.486 qpair failed and we were unable to recover it. 
00:25:02.486 [2024-07-15 13:17:24.113544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.113668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.113692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.113706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.113718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.113745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.123574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.123697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.123723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.123737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.123750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.123777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.133614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.133744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.133770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.133784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.133797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.133824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.143617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.143752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.143777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.143791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.143803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.143830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.153640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.153762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.153787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.153801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.153814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.153841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.163683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.163811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.163836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.163856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.163869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.163906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.486 [2024-07-15 13:17:24.173699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.486 [2024-07-15 13:17:24.173830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.486 [2024-07-15 13:17:24.173855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.486 [2024-07-15 13:17:24.173869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.486 [2024-07-15 13:17:24.173889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.486 [2024-07-15 13:17:24.173918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.486 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.183733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.745 [2024-07-15 13:17:24.183867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.745 [2024-07-15 13:17:24.183900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.745 [2024-07-15 13:17:24.183916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.745 [2024-07-15 13:17:24.183929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.745 [2024-07-15 13:17:24.183957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.745 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.193772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.745 [2024-07-15 13:17:24.193903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.745 [2024-07-15 13:17:24.193929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.745 [2024-07-15 13:17:24.193944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.745 [2024-07-15 13:17:24.193956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.745 [2024-07-15 13:17:24.193984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.745 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.203785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.745 [2024-07-15 13:17:24.203925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.745 [2024-07-15 13:17:24.203950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.745 [2024-07-15 13:17:24.203964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.745 [2024-07-15 13:17:24.203977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.745 [2024-07-15 13:17:24.204005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.745 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.213840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.745 [2024-07-15 13:17:24.214010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.745 [2024-07-15 13:17:24.214035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.745 [2024-07-15 13:17:24.214050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.745 [2024-07-15 13:17:24.214063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.745 [2024-07-15 13:17:24.214090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.745 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.223840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.745 [2024-07-15 13:17:24.224010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.745 [2024-07-15 13:17:24.224036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.745 [2024-07-15 13:17:24.224051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.745 [2024-07-15 13:17:24.224063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.745 [2024-07-15 13:17:24.224091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.745 qpair failed and we were unable to recover it.
00:25:02.745 [2024-07-15 13:17:24.233914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.234081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.234106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.234120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.234133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.234161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.243932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.244058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.244083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.244097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.244110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.244138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.253941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.254075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.254105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.254121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.254134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.254162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.263962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.264094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.264119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.264134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.264147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.264174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.273988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.274116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.274141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.274155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.274168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.274195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.284017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.284147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.284173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.284187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.284200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.284227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.294189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.294329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.294354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.294368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.294381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.294408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.304059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.304233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.304258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.304272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.304285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.304313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.314095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.314223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.314248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.314262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.314275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.314302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.324221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.324399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.324424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.324439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.324452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.324479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.334196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.334326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.334351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.334365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.334377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.334406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.344222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.344404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.344435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.344450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.344464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.344491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.354325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.354454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.354479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.354494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.354507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.354535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.364218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.364347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.364373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.364387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.746 [2024-07-15 13:17:24.364400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.746 [2024-07-15 13:17:24.364427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.746 qpair failed and we were unable to recover it.
00:25:02.746 [2024-07-15 13:17:24.374266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.746 [2024-07-15 13:17:24.374402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.746 [2024-07-15 13:17:24.374428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.746 [2024-07-15 13:17:24.374442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.374454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.374481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.384276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.384404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.384429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.384443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.384456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.384491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.394328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.394452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.394477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.394491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.394504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.394532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.404412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.404549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.404574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.404588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.404601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.404629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.414404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.414539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.414565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.414579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.414592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.414620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.424428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.424578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.424602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.424616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.424629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.424657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:02.747 [2024-07-15 13:17:24.434470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.747 [2024-07-15 13:17:24.434598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.747 [2024-07-15 13:17:24.434628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.747 [2024-07-15 13:17:24.434643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.747 [2024-07-15 13:17:24.434656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:02.747 [2024-07-15 13:17:24.434684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.747 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.444463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.444649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.444675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.444696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.444708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.444735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.454514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.454693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.454717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.454732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.454745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.454772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.464499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.464631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.464656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.464670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.464683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.464710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.474588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.474712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.474737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.474752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.474765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.474802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.484604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.484771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.484796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.484811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.484824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.484851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.494612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.008 [2024-07-15 13:17:24.494744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.008 [2024-07-15 13:17:24.494769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.008 [2024-07-15 13:17:24.494783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.008 [2024-07-15 13:17:24.494796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.008 [2024-07-15 13:17:24.494824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.008 qpair failed and we were unable to recover it.
00:25:03.008 [2024-07-15 13:17:24.504606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.504726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.504751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.504765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.504778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.504805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.514672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.514819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.514844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.514858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.514871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.514906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.524669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.524819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.524849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.524864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.524883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.524914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.534719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.534852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.534883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.534900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.534912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.534942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.544717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.544888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.544914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.544928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.544941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.544969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.554772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.554922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.554947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.554961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.554974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.555002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.564765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.564936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.564961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.564975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.564993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.565021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.574861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.575031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.575056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.575070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.575083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.575110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.584872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.585034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.585059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.585074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.585086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.585114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.594870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.595037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.595062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.595076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.595089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.595117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.604995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.605125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.605150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.605164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.605177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.605204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.614956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.615129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.615154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.615168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.615181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.615208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.624942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.625089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.625114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.625129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.625142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.625169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.634994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.635122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.635147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.635161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.009 [2024-07-15 13:17:24.635174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.009 [2024-07-15 13:17:24.635201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.009 qpair failed and we were unable to recover it.
00:25:03.009 [2024-07-15 13:17:24.644990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.009 [2024-07-15 13:17:24.645113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.009 [2024-07-15 13:17:24.645137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.009 [2024-07-15 13:17:24.645151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.645164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.645191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.655076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.655247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.655271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.655285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.655304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.655332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.665152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.665280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.665306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.665326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.665340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.665368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.675108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.675237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.675263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.675277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.675290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.675317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.685108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.685235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.685260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.685274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.685286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.685314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.695161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.695298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.695323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.695337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.695350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.695378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.010 [2024-07-15 13:17:24.705194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.010 [2024-07-15 13:17:24.705375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.010 [2024-07-15 13:17:24.705400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.010 [2024-07-15 13:17:24.705414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.010 [2024-07-15 13:17:24.705427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.010 [2024-07-15 13:17:24.705455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.010 qpair failed and we were unable to recover it.
00:25:03.271 [2024-07-15 13:17:24.715222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.271 [2024-07-15 13:17:24.715352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.271 [2024-07-15 13:17:24.715378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.271 [2024-07-15 13:17:24.715392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.271 [2024-07-15 13:17:24.715405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.271 [2024-07-15 13:17:24.715432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.271 qpair failed and we were unable to recover it.
00:25:03.271 [2024-07-15 13:17:24.725297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.271 [2024-07-15 13:17:24.725454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.271 [2024-07-15 13:17:24.725480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.725494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.725506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.725533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.735353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.735492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.735518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.735532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.735545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.735573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.745349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.745479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.745504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.745518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.745537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.745566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.755322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.755451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.755476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.755491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.755504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.755530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.765348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.765467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.765492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.765506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.765519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.765547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.775434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.775589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.775614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.775628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.775641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.775668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.785435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.785564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.785589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.785604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.785617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.785644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.795436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.795586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.795611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.795625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.795638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.795665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.805491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.805638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.805666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.805681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.805694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.805723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.815528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.815701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.815727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.815741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.815754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.815781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.825517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.825643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.825669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.825683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.825696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.825725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.835564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.835691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.835716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.835737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.835753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.835781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.845579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.845705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.845731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.845745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.845758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.845786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.855752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.855902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.855933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.855947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.855960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.855988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.865657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.865833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.865858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.865873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.865894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.272 [2024-07-15 13:17:24.865922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.272 qpair failed and we were unable to recover it.
00:25:03.272 [2024-07-15 13:17:24.875673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.272 [2024-07-15 13:17:24.875822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.272 [2024-07-15 13:17:24.875846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.272 [2024-07-15 13:17:24.875861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.272 [2024-07-15 13:17:24.875873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.875910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.885720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.885867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.885901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.885921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.885936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.885965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.895727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.895860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.895892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.895907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.895920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.895948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.905762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.905895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.905921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.905935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.905948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.905976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.915755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.915885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.915911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.915925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.915938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.915966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.925784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.925923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.925949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.925969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.925984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.926011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.935965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.936146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.936171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.936185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.936198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.936226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.945845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.945993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.946018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.946032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.946046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.946073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.955885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.956016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.956041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.956055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.956068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.956096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.273 [2024-07-15 13:17:24.965935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.273 [2024-07-15 13:17:24.966062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.273 [2024-07-15 13:17:24.966086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.273 [2024-07-15 13:17:24.966100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.273 [2024-07-15 13:17:24.966113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.273 [2024-07-15 13:17:24.966141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.273 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:24.976001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:24.976136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:24.976161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:24.976176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:24.976189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:24.976216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:24.985993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:24.986129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:24.986154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:24.986169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:24.986182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:24.986208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:24.996034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:24.996163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:24.996188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:24.996202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:24.996215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:24.996245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:25.006053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:25.006186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:25.006212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:25.006231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:25.006245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:25.006273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:25.016081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:25.016250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:25.016281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:25.016296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:25.016309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:25.016337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:25.026115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:25.026246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:25.026272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:25.026286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:25.026299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:25.026327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:25.036133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:25.036268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:25.036293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:25.036308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:25.036320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.533 [2024-07-15 13:17:25.036347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.533 qpair failed and we were unable to recover it.
00:25:03.533 [2024-07-15 13:17:25.046200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.533 [2024-07-15 13:17:25.046367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.533 [2024-07-15 13:17:25.046393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.533 [2024-07-15 13:17:25.046407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.533 [2024-07-15 13:17:25.046420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.046447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.056200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.056332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.056357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.056371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.056385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.056412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.066312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.066438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.066462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.066477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.066488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.066516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.076249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.076377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.076403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.076417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.076430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.076458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.086349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.086475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.086500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.086515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.086528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.086556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.096310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.096440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.096465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.096479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.096492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.096519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.106359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.106490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.106521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.106537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.106551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.106578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.116350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.116472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.116496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.116510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.116521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.116548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.126382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.126515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.126540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.126554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.126567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.126594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.136446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.136575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.136600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.136614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.136627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.136655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.146493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.146628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.146654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.146668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.146681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.146714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.156479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.156605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.156629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.156644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.156656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.156684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.166509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.166637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.166664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.166683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.166697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.166726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.176545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.176677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.176702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.176717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.176730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.176758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.186566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.186697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.534 [2024-07-15 13:17:25.186722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.534 [2024-07-15 13:17:25.186737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.534 [2024-07-15 13:17:25.186750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.534 [2024-07-15 13:17:25.186777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.534 qpair failed and we were unable to recover it.
00:25:03.534 [2024-07-15 13:17:25.196614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.534 [2024-07-15 13:17:25.196741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.535 [2024-07-15 13:17:25.196771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.535 [2024-07-15 13:17:25.196786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.535 [2024-07-15 13:17:25.196799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.535 [2024-07-15 13:17:25.196827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.535 qpair failed and we were unable to recover it.
00:25:03.535 [2024-07-15 13:17:25.206634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.535 [2024-07-15 13:17:25.206771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.535 [2024-07-15 13:17:25.206796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.535 [2024-07-15 13:17:25.206810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.535 [2024-07-15 13:17:25.206823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.535 [2024-07-15 13:17:25.206850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.535 qpair failed and we were unable to recover it.
00:25:03.535 [2024-07-15 13:17:25.216654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.535 [2024-07-15 13:17:25.216783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.535 [2024-07-15 13:17:25.216807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.535 [2024-07-15 13:17:25.216822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.535 [2024-07-15 13:17:25.216835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.535 [2024-07-15 13:17:25.216862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.535 qpair failed and we were unable to recover it.
00:25:03.535 [2024-07-15 13:17:25.226692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.535 [2024-07-15 13:17:25.226823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.535 [2024-07-15 13:17:25.226848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.535 [2024-07-15 13:17:25.226862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.535 [2024-07-15 13:17:25.226882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.535 [2024-07-15 13:17:25.226912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.535 qpair failed and we were unable to recover it.
00:25:03.795 [2024-07-15 13:17:25.236700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.795 [2024-07-15 13:17:25.236828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.795 [2024-07-15 13:17:25.236853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.795 [2024-07-15 13:17:25.236868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.795 [2024-07-15 13:17:25.236887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.795 [2024-07-15 13:17:25.236926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.795 qpair failed and we were unable to recover it.
00:25:03.795 [2024-07-15 13:17:25.246741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.796 [2024-07-15 13:17:25.246914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.796 [2024-07-15 13:17:25.246939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.796 [2024-07-15 13:17:25.246953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.796 [2024-07-15 13:17:25.246966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:03.796 [2024-07-15 13:17:25.246993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.796 qpair failed and we were unable to recover it.
00:25:03.796 [2024-07-15 13:17:25.256766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.256906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.256931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.256945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.256958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.256985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.266832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.266970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.266995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.267009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.267022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.267050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.276811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.276945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.276971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.276985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.276999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.277027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 
00:25:03.796 [2024-07-15 13:17:25.286851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.287028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.287059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.287074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.287087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.287114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.296900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.297028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.297052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.297066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.297078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.297106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.306941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.307105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.307130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.307144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.307157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.307185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 
00:25:03.796 [2024-07-15 13:17:25.316942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.317121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.317146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.317160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.317173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.317201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.326980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.327152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.327178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.327193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.327213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.327242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.337054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.337232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.337257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.337271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.337284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.337312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 
00:25:03.796 [2024-07-15 13:17:25.347024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.347153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.347178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.347192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.347205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.347232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.357067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.357208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.357233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.357247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.357260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.357287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.367084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.367218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.367243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.367258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.367270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.367298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 
00:25:03.796 [2024-07-15 13:17:25.377127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.377276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.796 [2024-07-15 13:17:25.377302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.796 [2024-07-15 13:17:25.377316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.796 [2024-07-15 13:17:25.377329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.796 [2024-07-15 13:17:25.377356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.796 qpair failed and we were unable to recover it. 00:25:03.796 [2024-07-15 13:17:25.387152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.796 [2024-07-15 13:17:25.387282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.387306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.387320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.387333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.387361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.397174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.397318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.397343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.397357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.397370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.397397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 
00:25:03.797 [2024-07-15 13:17:25.407276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.407403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.407429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.407443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.407456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.407483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.417233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.417371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.417397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.417411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.417429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.417460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.427242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.427365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.427391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.427405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.427418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.427445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 
00:25:03.797 [2024-07-15 13:17:25.437289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.437459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.437484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.437498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.437511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.437539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.447363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.447507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.447533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.447547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.447560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.447587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.457342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.457474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.457500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.457514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.457527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.457555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 
00:25:03.797 [2024-07-15 13:17:25.467494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.467628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.467653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.467668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.467680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.467708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.477488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.477618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.477644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.477658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.477671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.477698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 00:25:03.797 [2024-07-15 13:17:25.487437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.797 [2024-07-15 13:17:25.487575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.797 [2024-07-15 13:17:25.487600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.797 [2024-07-15 13:17:25.487615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.797 [2024-07-15 13:17:25.487628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:03.797 [2024-07-15 13:17:25.487655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.797 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-07-15 13:17:25.497539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.497702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.497727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.497741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.497754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.497782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-07-15 13:17:25.507551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.507699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.507724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.507739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.507757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.507785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-07-15 13:17:25.517506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.517632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.517657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.517671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.517684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.517711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-07-15 13:17:25.527554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.527735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.527760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.527775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.527788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.527815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-07-15 13:17:25.537574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.537704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.537729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.537743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.537757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.537785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-07-15 13:17:25.547581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.547737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.547763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.547777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.547791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.547818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-07-15 13:17:25.557621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.557769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.557794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.557808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.557822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.557850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-07-15 13:17:25.567655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.056 [2024-07-15 13:17:25.567794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.056 [2024-07-15 13:17:25.567820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.056 [2024-07-15 13:17:25.567834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.056 [2024-07-15 13:17:25.567848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.056 [2024-07-15 13:17:25.567881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.577709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.577842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.577867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.577893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.577908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.577936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-07-15 13:17:25.587705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.587889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.587915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.587929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.587942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.587969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.597766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.597931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.597956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.597976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.597989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.598017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.607762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.607901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.607927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.607941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.607954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.607982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-07-15 13:17:25.617804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.617944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.617969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.617983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.617996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.618024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.627909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.628047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.628072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.628086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.628099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.628127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.637857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.638012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.638037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.638051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.638064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.638091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-07-15 13:17:25.647892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.648019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.648044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.648058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.648072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.648099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.657939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.658081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.658106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.658120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.658133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.658160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.667928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.668067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.668092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.668106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.668119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.668147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-07-15 13:17:25.677971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.678097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.678122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.678137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.678149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.678177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.687998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.688125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.688150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.688170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.688184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.688212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.698136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.698265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.698290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.698304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.698317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.698345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-07-15 13:17:25.708127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.057 [2024-07-15 13:17:25.708300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.057 [2024-07-15 13:17:25.708325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.057 [2024-07-15 13:17:25.708339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.057 [2024-07-15 13:17:25.708352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.057 [2024-07-15 13:17:25.708380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-07-15 13:17:25.718074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.058 [2024-07-15 13:17:25.718198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.058 [2024-07-15 13:17:25.718223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.058 [2024-07-15 13:17:25.718237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.058 [2024-07-15 13:17:25.718250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.058 [2024-07-15 13:17:25.718277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-07-15 13:17:25.728140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.058 [2024-07-15 13:17:25.728268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.058 [2024-07-15 13:17:25.728293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.058 [2024-07-15 13:17:25.728307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.058 [2024-07-15 13:17:25.728320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.058 [2024-07-15 13:17:25.728348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.058 qpair failed and we were unable to recover it. 
00:25:04.058 [2024-07-15 13:17:25.738197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.058 [2024-07-15 13:17:25.738330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.058 [2024-07-15 13:17:25.738355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.058 [2024-07-15 13:17:25.738369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.058 [2024-07-15 13:17:25.738382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.058 [2024-07-15 13:17:25.738409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-07-15 13:17:25.748192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.058 [2024-07-15 13:17:25.748365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.058 [2024-07-15 13:17:25.748390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.058 [2024-07-15 13:17:25.748404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.058 [2024-07-15 13:17:25.748417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.058 [2024-07-15 13:17:25.748445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.316 [2024-07-15 13:17:25.758242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.316 [2024-07-15 13:17:25.758399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.316 [2024-07-15 13:17:25.758423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.316 [2024-07-15 13:17:25.758437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.316 [2024-07-15 13:17:25.758450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.316 [2024-07-15 13:17:25.758477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.316 qpair failed and we were unable to recover it. 
00:25:04.316 [2024-07-15 13:17:25.768215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.316 [2024-07-15 13:17:25.768355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.316 [2024-07-15 13:17:25.768380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.316 [2024-07-15 13:17:25.768394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.316 [2024-07-15 13:17:25.768407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.316 [2024-07-15 13:17:25.768434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.316 qpair failed and we were unable to recover it. 00:25:04.316 [2024-07-15 13:17:25.778247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.316 [2024-07-15 13:17:25.778384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.316 [2024-07-15 13:17:25.778409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.316 [2024-07-15 13:17:25.778429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.316 [2024-07-15 13:17:25.778443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.316 [2024-07-15 13:17:25.778471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.316 qpair failed and we were unable to recover it. 00:25:04.316 [2024-07-15 13:17:25.788297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.316 [2024-07-15 13:17:25.788425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.316 [2024-07-15 13:17:25.788450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.316 [2024-07-15 13:17:25.788465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.788478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.788505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 
00:25:04.317 [2024-07-15 13:17:25.798312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.798436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.798461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.798475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.798487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.798517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.808386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.808559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.808584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.808598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.808611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.808638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.818387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.818513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.818538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.818552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.818565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.818592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 
00:25:04.317 [2024-07-15 13:17:25.828403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.828574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.828599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.828613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.828627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.828655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.838409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.838537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.838561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.838575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.838588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.838615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.848455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.848580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.848606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.848620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.848633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.848660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 
00:25:04.317 [2024-07-15 13:17:25.858505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.858631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.858655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.858669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.858682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.858709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.868526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.868653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.868683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.868698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.868711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.868738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.878526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.878695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.878720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.878735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.878748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.878775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 
00:25:04.317 [2024-07-15 13:17:25.888538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.888662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.888687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.888701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.888715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.888742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.898628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.898758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.898783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.898798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.898810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.898838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.908625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.908756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.908781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.908795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.908808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.908841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 
00:25:04.317 [2024-07-15 13:17:25.918647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.918796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.317 [2024-07-15 13:17:25.918820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.317 [2024-07-15 13:17:25.918835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.317 [2024-07-15 13:17:25.918848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.317 [2024-07-15 13:17:25.918882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.317 qpair failed and we were unable to recover it. 00:25:04.317 [2024-07-15 13:17:25.928661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.317 [2024-07-15 13:17:25.928785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.928809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.928824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.928836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.928864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.318 [2024-07-15 13:17:25.938703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.938830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.938855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.938869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.938889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.938918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 
00:25:04.318 [2024-07-15 13:17:25.948732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.948864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.948897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.948912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.948925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.948952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.318 [2024-07-15 13:17:25.958766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.958909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.958940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.958955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.958968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.958995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.318 [2024-07-15 13:17:25.968799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.968953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.968988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.969003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.969017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.969045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 
00:25:04.318 [2024-07-15 13:17:25.978840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.979003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.979030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.979044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.979061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.979091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.318 [2024-07-15 13:17:25.988838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.988969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.988994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.989009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.989022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.989049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.318 [2024-07-15 13:17:25.998887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:25.999011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:25.999036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:25.999050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:25.999063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:25.999101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 
00:25:04.318 [2024-07-15 13:17:26.008932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.318 [2024-07-15 13:17:26.009054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.318 [2024-07-15 13:17:26.009078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.318 [2024-07-15 13:17:26.009093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.318 [2024-07-15 13:17:26.009106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.318 [2024-07-15 13:17:26.009134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.318 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.018962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.019108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.019133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.019147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.019160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.019187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.028979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.029101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.029126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.029140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.029153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.029181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 
00:25:04.576 [2024-07-15 13:17:26.039021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.039200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.039225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.039240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.039252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.039280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.049017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.049145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.049175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.049190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.049203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.049230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.059058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.059213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.059238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.059252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.059265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.059292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 
00:25:04.576 [2024-07-15 13:17:26.069108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.069280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.069305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.069319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.069331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.069358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.079105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.079229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.079254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.079268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.079281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.079308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.089153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.089281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.089306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.089320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.089333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.089366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 
00:25:04.576 [2024-07-15 13:17:26.099198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.099352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.099379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.099394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.099415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.099448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.109232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.109365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.109391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.109406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.109418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.109446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 00:25:04.576 [2024-07-15 13:17:26.119229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.576 [2024-07-15 13:17:26.119352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.576 [2024-07-15 13:17:26.119376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.576 [2024-07-15 13:17:26.119390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.576 [2024-07-15 13:17:26.119402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.576 [2024-07-15 13:17:26.119429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.576 qpair failed and we were unable to recover it. 
00:25:04.576 [2024-07-15 13:17:26.129392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.129525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.129550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.129565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.129578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.129606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.139293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.139425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.139455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.139470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.139483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.139510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.149318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.149450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.149475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.149489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.149502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.149529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 
00:25:04.577 [2024-07-15 13:17:26.159333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.159494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.159520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.159534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.159547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.159574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.169374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.169499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.169525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.169539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.169552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.169579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.179441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.179572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.179598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.179612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.179631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.179660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 
00:25:04.577 [2024-07-15 13:17:26.189496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.189661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.189687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.189701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.189714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.189741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.199519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.199647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.199672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.199686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.199699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.199727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.209484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.209622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.209647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.209661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.209674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.209701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 
00:25:04.577 [2024-07-15 13:17:26.219618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.219750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.219774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.219788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.219801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.219829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.229557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.229695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.229720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.229735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.229747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.229775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.239594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.239723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.239748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.239762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.239775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.239802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 
00:25:04.577 [2024-07-15 13:17:26.249642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.249774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.249799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.249813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.249825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.249853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.259639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.259770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.577 [2024-07-15 13:17:26.259795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.577 [2024-07-15 13:17:26.259809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.577 [2024-07-15 13:17:26.259822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.577 [2024-07-15 13:17:26.259849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.577 qpair failed and we were unable to recover it. 00:25:04.577 [2024-07-15 13:17:26.269678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.577 [2024-07-15 13:17:26.269807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.578 [2024-07-15 13:17:26.269832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.578 [2024-07-15 13:17:26.269846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.578 [2024-07-15 13:17:26.269864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.578 [2024-07-15 13:17:26.269899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.578 qpair failed and we were unable to recover it. 
00:25:04.836 [2024-07-15 13:17:26.279681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.279805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.279830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.279844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.279857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.279890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 00:25:04.836 [2024-07-15 13:17:26.289765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.289899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.289925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.289939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.289953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.289980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 00:25:04.836 [2024-07-15 13:17:26.299787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.299923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.299948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.299962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.299974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.300002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 
00:25:04.836 [2024-07-15 13:17:26.309799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.309937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.309962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.309977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.309989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.310017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 00:25:04.836 [2024-07-15 13:17:26.319798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.319959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.319984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.319998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.320010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.320038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 00:25:04.836 [2024-07-15 13:17:26.329853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.329988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.330013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.330028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.330041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.330068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 
00:25:04.836 [2024-07-15 13:17:26.339918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.836 [2024-07-15 13:17:26.340077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.836 [2024-07-15 13:17:26.340102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.836 [2024-07-15 13:17:26.340116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.836 [2024-07-15 13:17:26.340129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.836 [2024-07-15 13:17:26.340156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.836 qpair failed and we were unable to recover it. 00:25:04.836 [2024-07-15 13:17:26.349896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.837 [2024-07-15 13:17:26.350031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.837 [2024-07-15 13:17:26.350056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.837 [2024-07-15 13:17:26.350070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.837 [2024-07-15 13:17:26.350083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.837 [2024-07-15 13:17:26.350111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.837 qpair failed and we were unable to recover it. 00:25:04.837 [2024-07-15 13:17:26.360025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.837 [2024-07-15 13:17:26.360163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.837 [2024-07-15 13:17:26.360188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.837 [2024-07-15 13:17:26.360208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.837 [2024-07-15 13:17:26.360221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.837 [2024-07-15 13:17:26.360250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.837 qpair failed and we were unable to recover it. 
00:25:04.837 [2024-07-15 13:17:26.369990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.837 [2024-07-15 13:17:26.370152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.837 [2024-07-15 13:17:26.370177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.837 [2024-07-15 13:17:26.370192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.837 [2024-07-15 13:17:26.370204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.837 [2024-07-15 13:17:26.370232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.837 qpair failed and we were unable to recover it. 00:25:04.837 [2024-07-15 13:17:26.379976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.837 [2024-07-15 13:17:26.380107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.837 [2024-07-15 13:17:26.380132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.837 [2024-07-15 13:17:26.380146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.837 [2024-07-15 13:17:26.380159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.837 [2024-07-15 13:17:26.380186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.837 qpair failed and we were unable to recover it. 00:25:04.837 [2024-07-15 13:17:26.390000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.837 [2024-07-15 13:17:26.390125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.837 [2024-07-15 13:17:26.390150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.837 [2024-07-15 13:17:26.390163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.837 [2024-07-15 13:17:26.390176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:04.837 [2024-07-15 13:17:26.390203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.837 qpair failed and we were unable to recover it. 
00:25:04.837 [2024-07-15 13:17:26.400124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.400249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.400274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.400288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.400301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.400328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.410079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.410246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.410272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.410286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.410299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.410327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.420127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.420267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.420294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.420314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.420328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.420357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.430151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.430279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.430304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.430318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.430331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.430359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.440135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.440262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.440288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.440302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.440315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.440343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.450222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.450354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.450380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.450406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.450420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.450450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.460218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.460350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.460375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.460389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.460403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.460433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.470299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.470426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.470455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.470470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.470484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.470513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.480290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.480413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.480439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.480454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.837 [2024-07-15 13:17:26.480467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.837 [2024-07-15 13:17:26.480495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.837 qpair failed and we were unable to recover it.
00:25:04.837 [2024-07-15 13:17:26.490283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.837 [2024-07-15 13:17:26.490408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.837 [2024-07-15 13:17:26.490434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.837 [2024-07-15 13:17:26.490448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.838 [2024-07-15 13:17:26.490461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.838 [2024-07-15 13:17:26.490488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.838 qpair failed and we were unable to recover it.
00:25:04.838 [2024-07-15 13:17:26.500350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.838 [2024-07-15 13:17:26.500480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.838 [2024-07-15 13:17:26.500505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.838 [2024-07-15 13:17:26.500519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.838 [2024-07-15 13:17:26.500532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.838 [2024-07-15 13:17:26.500560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.838 qpair failed and we were unable to recover it.
00:25:04.838 [2024-07-15 13:17:26.510353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.838 [2024-07-15 13:17:26.510483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.838 [2024-07-15 13:17:26.510509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.838 [2024-07-15 13:17:26.510524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.838 [2024-07-15 13:17:26.510536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.838 [2024-07-15 13:17:26.510564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.838 qpair failed and we were unable to recover it.
00:25:04.838 [2024-07-15 13:17:26.520382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.838 [2024-07-15 13:17:26.520507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.838 [2024-07-15 13:17:26.520532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.838 [2024-07-15 13:17:26.520546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.838 [2024-07-15 13:17:26.520559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.838 [2024-07-15 13:17:26.520587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.838 qpair failed and we were unable to recover it.
00:25:04.838 [2024-07-15 13:17:26.530403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.838 [2024-07-15 13:17:26.530531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.838 [2024-07-15 13:17:26.530556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.838 [2024-07-15 13:17:26.530570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.838 [2024-07-15 13:17:26.530583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:04.838 [2024-07-15 13:17:26.530610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.838 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.540463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.098 [2024-07-15 13:17:26.540596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.098 [2024-07-15 13:17:26.540621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.098 [2024-07-15 13:17:26.540641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.098 [2024-07-15 13:17:26.540655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.098 [2024-07-15 13:17:26.540683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.098 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.550493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.098 [2024-07-15 13:17:26.550661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.098 [2024-07-15 13:17:26.550687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.098 [2024-07-15 13:17:26.550702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.098 [2024-07-15 13:17:26.550719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.098 [2024-07-15 13:17:26.550748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.098 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.560492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.098 [2024-07-15 13:17:26.560618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.098 [2024-07-15 13:17:26.560644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.098 [2024-07-15 13:17:26.560658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.098 [2024-07-15 13:17:26.560671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.098 [2024-07-15 13:17:26.560699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.098 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.570510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.098 [2024-07-15 13:17:26.570643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.098 [2024-07-15 13:17:26.570667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.098 [2024-07-15 13:17:26.570681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.098 [2024-07-15 13:17:26.570694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.098 [2024-07-15 13:17:26.570721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.098 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.580548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.098 [2024-07-15 13:17:26.580689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.098 [2024-07-15 13:17:26.580714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.098 [2024-07-15 13:17:26.580728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.098 [2024-07-15 13:17:26.580741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.098 [2024-07-15 13:17:26.580769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.098 qpair failed and we were unable to recover it.
00:25:05.098 [2024-07-15 13:17:26.590582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.590750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.590775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.590789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.590802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.590830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.600623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.600750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.600775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.600789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.600802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.600830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.610636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.610755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.610780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.610795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.610807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.610835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.620665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.620797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.620822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.620836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.620850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.620883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.630690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.630825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.630855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.630870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.630890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.630920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.640748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.640874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.640907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.640922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.640935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.640962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.650746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.650886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.650912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.650926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.650940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.650968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.660779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.660930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.660956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.660970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.660983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.661010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.670836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.670976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.671002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.671016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.671029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.671056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.680823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.680944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.680969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.680984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.680996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.681024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.690899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.691023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.691048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.691062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.691075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.691103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.700905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.701035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.701060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.701074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.701087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.701115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.710931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.711067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.711092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.711107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.711120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.711147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.720988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.721112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.721142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.721158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.721171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.721198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.731031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.731215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.731241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.731262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.099 [2024-07-15 13:17:26.731275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.099 [2024-07-15 13:17:26.731304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.099 qpair failed and we were unable to recover it.
00:25:05.099 [2024-07-15 13:17:26.741059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.099 [2024-07-15 13:17:26.741211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.099 [2024-07-15 13:17:26.741235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.099 [2024-07-15 13:17:26.741249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.741263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.741291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.100 [2024-07-15 13:17:26.751076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.100 [2024-07-15 13:17:26.751219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.100 [2024-07-15 13:17:26.751244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.100 [2024-07-15 13:17:26.751258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.751270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.751298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.100 [2024-07-15 13:17:26.761103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.100 [2024-07-15 13:17:26.761291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.100 [2024-07-15 13:17:26.761317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.100 [2024-07-15 13:17:26.761331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.761344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.761381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.100 [2024-07-15 13:17:26.771257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.100 [2024-07-15 13:17:26.771429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.100 [2024-07-15 13:17:26.771458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.100 [2024-07-15 13:17:26.771473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.771486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.771515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.100 [2024-07-15 13:17:26.781233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.100 [2024-07-15 13:17:26.781363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.100 [2024-07-15 13:17:26.781389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.100 [2024-07-15 13:17:26.781403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.781416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.781444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.100 [2024-07-15 13:17:26.791221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.100 [2024-07-15 13:17:26.791355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.100 [2024-07-15 13:17:26.791381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.100 [2024-07-15 13:17:26.791395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.100 [2024-07-15 13:17:26.791408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.100 [2024-07-15 13:17:26.791436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.100 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.801198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.801325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.801351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.801365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.801378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.801405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.811344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.811511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.811542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.811557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.811570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.811598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.821282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.821453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.821479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.821493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.821505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.821533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.831273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.831402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.831427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.831441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.831454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.831481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.841391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.841527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.841553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.841567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.841580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.841607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.851327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.851461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.851487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.851501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.851514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.851547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.861382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.861554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.861579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.861593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.861606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.861633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.871412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.871544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.871570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.871584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.871596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.871624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.881416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.881540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.881564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.881579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.881592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.881619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.891487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.891637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.362 [2024-07-15 13:17:26.891661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.362 [2024-07-15 13:17:26.891676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.362 [2024-07-15 13:17:26.891688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.362 [2024-07-15 13:17:26.891716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.362 qpair failed and we were unable to recover it.
00:25:05.362 [2024-07-15 13:17:26.901461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.362 [2024-07-15 13:17:26.901589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.901620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.901635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.901648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.901675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.911505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.911627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.911652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.911667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.911679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.911707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.921620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.921745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.921770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.921784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.921797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.921825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.931592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.931757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.931782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.931796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.931809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.931836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.941625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.941781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.941806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.941820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.941838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.941866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.951690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.951812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.951837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.951851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.951864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.951897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.961651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.363 [2024-07-15 13:17:26.961770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.363 [2024-07-15 13:17:26.961796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.363 [2024-07-15 13:17:26.961810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.363 [2024-07-15 13:17:26.961823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200
00:25:05.363 [2024-07-15 13:17:26.961851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.363 qpair failed and we were unable to recover it.
00:25:05.363 [2024-07-15 13:17:26.971694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:26.971857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:26.971888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:26.971904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:26.971918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:26.971945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 00:25:05.363 [2024-07-15 13:17:26.981712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:26.981856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:26.981887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:26.981903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:26.981916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:26.981944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 00:25:05.363 [2024-07-15 13:17:26.991747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:26.991898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:26.991923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:26.991938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:26.991950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:26.991978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 
00:25:05.363 [2024-07-15 13:17:27.001741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:27.001872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:27.001904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:27.001919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:27.001931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:27.001961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 00:25:05.363 [2024-07-15 13:17:27.011867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:27.011997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:27.012022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:27.012036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:27.012049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:27.012076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 00:25:05.363 [2024-07-15 13:17:27.021805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:27.021935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:27.021960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:27.021974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:27.021986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.363 [2024-07-15 13:17:27.022014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.363 qpair failed and we were unable to recover it. 
00:25:05.363 [2024-07-15 13:17:27.031926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.363 [2024-07-15 13:17:27.032072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.363 [2024-07-15 13:17:27.032097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.363 [2024-07-15 13:17:27.032111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.363 [2024-07-15 13:17:27.032129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.364 [2024-07-15 13:17:27.032158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.364 qpair failed and we were unable to recover it. 00:25:05.364 [2024-07-15 13:17:27.041849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.364 [2024-07-15 13:17:27.041984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.364 [2024-07-15 13:17:27.042009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.364 [2024-07-15 13:17:27.042023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.364 [2024-07-15 13:17:27.042036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.364 [2024-07-15 13:17:27.042063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.364 qpair failed and we were unable to recover it. 00:25:05.364 [2024-07-15 13:17:27.051882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.364 [2024-07-15 13:17:27.052009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.364 [2024-07-15 13:17:27.052034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.364 [2024-07-15 13:17:27.052048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.364 [2024-07-15 13:17:27.052061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.364 [2024-07-15 13:17:27.052088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.364 qpair failed and we were unable to recover it. 
00:25:05.625 [2024-07-15 13:17:27.061953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.062107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.062132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.062146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.062160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.062188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.071971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.072147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.072173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.072187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.072199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.072226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.081971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.082105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.082130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.082144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.082158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.082185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 
00:25:05.625 [2024-07-15 13:17:27.092004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.092126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.092151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.092165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.092178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.092205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.102051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.102186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.102215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.102230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.102244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.102272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.112080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.112208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.112233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.112247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.112260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.112288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 
00:25:05.625 [2024-07-15 13:17:27.122095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.122239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.122263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.122276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.122294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.122322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.132238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.132369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.132395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.132410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.132423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.625 [2024-07-15 13:17:27.132450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.625 qpair failed and we were unable to recover it. 00:25:05.625 [2024-07-15 13:17:27.142159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.625 [2024-07-15 13:17:27.142290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.625 [2024-07-15 13:17:27.142314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.625 [2024-07-15 13:17:27.142329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.625 [2024-07-15 13:17:27.142341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.142369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 
00:25:05.626 [2024-07-15 13:17:27.152167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.152304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.152330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.152344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.152357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.152385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.162188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.162308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.162334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.162348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.162361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.162388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.172224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.172347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.172372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.172386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.172399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.172427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 
00:25:05.626 [2024-07-15 13:17:27.182243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.182374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.182399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.182413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.182426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.182453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.192288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.192421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.192447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.192461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.192474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.192500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.202308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.202427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.202452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.202467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.202479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.202506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 
00:25:05.626 [2024-07-15 13:17:27.212431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.212560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.212586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.212606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.212620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.212647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.222372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.222512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.222537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.222550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.222563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.222591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.232446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.232634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.232660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.232674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.232687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.232714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 
00:25:05.626 [2024-07-15 13:17:27.242469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.242598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.242623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.626 [2024-07-15 13:17:27.242637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.626 [2024-07-15 13:17:27.242649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.626 [2024-07-15 13:17:27.242676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.626 qpair failed and we were unable to recover it. 00:25:05.626 [2024-07-15 13:17:27.252449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.626 [2024-07-15 13:17:27.252585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.626 [2024-07-15 13:17:27.252610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.627 [2024-07-15 13:17:27.252624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.627 [2024-07-15 13:17:27.252636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.627 [2024-07-15 13:17:27.252664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.627 qpair failed and we were unable to recover it. 00:25:05.627 [2024-07-15 13:17:27.262483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.627 [2024-07-15 13:17:27.262615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.627 [2024-07-15 13:17:27.262640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.627 [2024-07-15 13:17:27.262654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.627 [2024-07-15 13:17:27.262666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.627 [2024-07-15 13:17:27.262694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.627 qpair failed and we were unable to recover it. 
00:25:05.627 [2024-07-15 13:17:27.272563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.627 [2024-07-15 13:17:27.272697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.627 [2024-07-15 13:17:27.272722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.627 [2024-07-15 13:17:27.272736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.627 [2024-07-15 13:17:27.272749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.627 [2024-07-15 13:17:27.272777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.627 qpair failed and we were unable to recover it. 00:25:05.627 [2024-07-15 13:17:27.282534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.627 [2024-07-15 13:17:27.282659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.627 [2024-07-15 13:17:27.282683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.627 [2024-07-15 13:17:27.282697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.627 [2024-07-15 13:17:27.282710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.627 [2024-07-15 13:17:27.282738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.627 qpair failed and we were unable to recover it. 00:25:05.627 [2024-07-15 13:17:27.292552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.627 [2024-07-15 13:17:27.292704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.627 [2024-07-15 13:17:27.292730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.627 [2024-07-15 13:17:27.292745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.627 [2024-07-15 13:17:27.292758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13f2200 00:25:05.627 [2024-07-15 13:17:27.292786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.627 qpair failed and we were unable to recover it. 
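This CONNECT failure sequence recurs, with advancing timestamps, for each I/O queue pair the host tries to establish during this phase of the disconnect test: the target was bounced mid-run, so every CONNECT still names controller ID 0x1, which the restarted target no longer knows. The logged status decodes consistently with that: sct 1 is the Command Specific status type, and for a Fabrics CONNECT an sc of 130 (0x82) is Connect Invalid Parameters per the NVMe-oF spec, matching the target-side "Unknown controller ID" message. A rough way to provoke the same class of failure by hand, sketched with the kernel initiator rather than the SPDK host stack (addresses taken from this log; exact error text will differ):

#!/usr/bin/env bash
# Hypothetical reproduction sketch -- not part of this test suite.
# Connect a host to the target from this run, then bounce the target
# while I/O queues are still being established.
sudo modprobe nvme-tcp
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Restarting nvmf_tgt at this point invalidates the admin-created
# controller, so pending I/O qpair CONNECTs fail with Connect Invalid
# Parameters until the host gives up or re-establishes the association.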
00:25:05.627 [2024-07-15 13:17:27.302606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.627 [2024-07-15 13:17:27.302757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.627 [2024-07-15 13:17:27.302790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.627 [2024-07-15 13:17:27.302812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.627 [2024-07-15 13:17:27.302827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e84000b90
00:25:05.627 [2024-07-15 13:17:27.302859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.627 qpair failed and we were unable to recover it.
00:25:05.627 [2024-07-15 13:17:27.312642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.627 [2024-07-15 13:17:27.312775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.627 [2024-07-15 13:17:27.312803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.627 [2024-07-15 13:17:27.312818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.627 [2024-07-15 13:17:27.312831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e84000b90
00:25:05.627 [2024-07-15 13:17:27.312861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:05.627 qpair failed and we were unable to recover it.
00:25:05.627 [2024-07-15 13:17:27.312972] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:25:05.363 A controller has encountered a failure and is being reset.
00:25:05.886 Controller properly reset.
00:25:05.886 Initializing NVMe Controllers
00:25:05.886 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:05.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:05.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:05.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:05.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:05.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:05.886 Initialization complete. Launching workers.
00:25:05.886 Starting thread on core 1
00:25:05.886 Starting thread on core 2
00:25:05.886 Starting thread on core 3
00:25:05.886 Starting thread on core 0
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:25:05.886
00:25:05.886 real 0m10.817s
00:25:05.886 user 0m17.902s
00:25:05.886 sys 0m5.360s
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.886 ************************************
00:25:05.886 END TEST nvmf_target_disconnect_tc2
00:25:05.886 ************************************
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:05.886 rmmod nvme_tcp
00:25:05.886 rmmod nvme_fabrics
00:25:05.886 rmmod nvme_keyring
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3937802 ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3937802
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3937802 ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3937802
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3937802
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3937802'
00:25:05.886 killing process with pid 3937802
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3937802
00:25:05.886 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3937802
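The nvmftestfini/nvmfcleanup trace above reduces to a short teardown sequence; a condensed sketch, assuming the target was started by the same shell (PID 3937802 in this run) so that wait can reap it:

#!/usr/bin/env bash
# Condensed sketch of the teardown order traced above.
pid=3937802                        # nvmf_tgt PID from this run
sync                               # flush outstanding I/O first
sudo modprobe -v -r nvme-tcp       # as logged, also removes the now-unused nvme_fabrics/nvme_keyring
sudo modprobe -v -r nvme-fabrics   # no-op if the previous step already removed it
kill "$pid" && wait "$pid"         # stop the SPDK target (reactor_4 in the ps output above)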
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:06.144 13:17:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:08.673 13:17:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:08.673
00:25:08.673 real 0m15.592s
00:25:08.673 user 0m44.052s
00:25:08.673 sys 0m7.283s
00:25:08.673 13:17:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:08.673 13:17:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:25:08.673 ************************************
00:25:08.673 END TEST nvmf_target_disconnect
00:25:08.673 ************************************
00:25:08.673 13:17:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:08.673 13:17:29 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:25:08.673 13:17:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:08.673 13:17:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.673 13:17:29 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:25:08.673
00:25:08.673 real 19m39.927s
00:25:08.673 user 46m31.246s
00:25:08.673 sys 4m55.039s
00:25:08.673 13:17:29 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:08.673 13:17:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.673 ************************************
00:25:08.673 END TEST nvmf_tcp
00:25:08.673 ************************************
00:25:08.673 13:17:29 -- common/autotest_common.sh@1142 -- # return 0
00:25:08.673 13:17:29 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:25:08.673 13:17:29 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:08.673 13:17:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:08.673 13:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:08.673 13:17:29 -- common/autotest_common.sh@10 -- # set +x
00:25:08.673 ************************************
00:25:08.673 START TEST spdkcli_nvmf_tcp
00:25:08.673 ************************************
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:08.673 * Looking for test storage...
00:25:08.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.673 13:17:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3939000
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3939000
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3939000 ']'
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:08.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:08.674 13:17:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.674 [2024-07-15 13:17:30.037580] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:25:08.674 [2024-07-15 13:17:30.037656] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939000 ]
00:25:08.674 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.674 [2024-07-15 13:17:30.094731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:08.674 [2024-07-15 13:17:30.205411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:08.674 [2024-07-15 13:17:30.205430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:08.674 13:17:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:25:08.674 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:25:08.674 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:25:08.674 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:25:08.674 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:25:08.674 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:25:08.674 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:25:08.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:08.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:08.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:25:08.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:25:08.674 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:25:08.674 '
00:25:11.208 [2024-07-15 13:17:32.876836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:12.581 [2024-07-15 13:17:34.101151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:25:15.115 [2024-07-15 13:17:36.384471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:25:17.015 [2024-07-15 13:17:38.354592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
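With the transport initialized and the listeners up, the spdkcli_job.py invocation traced above populates the target; its results are echoed below. The same configuration can also be built directly against /var/tmp/spdk.sock with scripts/rpc.py. A sketch for the first subsystem only (these RPC method names exist in SPDK, though flag spellings can vary between releases):

#!/usr/bin/env bash
# Hand-driven equivalent of part of the spdkcli job above (sketch only).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3        # 32 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192        # io_unit_size=8192
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
    -s N37SXV509SRW -m 4 -a                                  # serial, max namespaces, allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4260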
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:25:18.388 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:25:18.388 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:25:18.388 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:18.388 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:18.388 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:25:18.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:25:18.388 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:25:18.388 13:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
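The check_match step traced here is the verification pass: it captures the spdkcli tree the job just built and compares it against a stored pattern file, whose wildcard tokens absorb run-specific values. In outline, following the commands traced from test/spdkcli/common.sh in this run:

#!/usr/bin/env bash
# Outline of check_match as traced above and below.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/scripts/spdkcli.py ll /nvmf > $spdk/test/spdkcli/match_files/spdkcli_nvmf.test
$spdk/test/app/match/match $spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
rm -f $spdk/test/spdkcli/match_files/spdkcli_nvmf.test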
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.957 13:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:18.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:18.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:18.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:18.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:18.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:18.957 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:18.957 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:18.957 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:18.957 ' 00:25:24.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:24.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:24.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:24.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:24.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:24.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:24.224 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:24.224 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:24.224 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3939000 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3939000 ']' 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3939000 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3939000 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3939000' 00:25:24.224 killing process with pid 3939000 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3939000 00:25:24.224 13:17:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3939000 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3939000 ']' 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3939000 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3939000 ']' 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3939000 00:25:24.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3939000) - No such process 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3939000 is not found' 00:25:24.483 Process with pid 3939000 is not found 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:24.483 00:25:24.483 real 0m16.085s 00:25:24.483 user 0m33.982s 00:25:24.483 sys 0m0.824s 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.483 13:17:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.483 ************************************ 00:25:24.483 END TEST spdkcli_nvmf_tcp 00:25:24.483 ************************************ 00:25:24.483 13:17:46 -- common/autotest_common.sh@1142 -- # return 0 00:25:24.483 13:17:46 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:24.483 13:17:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.483 13:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.483 13:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:24.483 ************************************ 00:25:24.483 START TEST nvmf_identify_passthru 00:25:24.483 ************************************ 00:25:24.483 13:17:46 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:24.483 * Looking for test storage... 00:25:24.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:24.483 13:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.483 13:17:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.483 13:17:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.483 13:17:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.483 13:17:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.483 13:17:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.483 13:17:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.483 13:17:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:24.483 13:17:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.483 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.483 13:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.484 13:17:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.484 13:17:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.484 13:17:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.484 13:17:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.484 13:17:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.484 13:17:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.484 13:17:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:24.484 13:17:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.484 13:17:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.484 13:17:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:24.484 13:17:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.484 13:17:46 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.484 13:17:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.389 13:17:47 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.389 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
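The device scan traced above comes from gather_supported_nvmf_pci_devs in nvmf/common.sh: it builds an allow-list of NVMe-oF-capable NICs (Intel e810/x722 and Mellanox device IDs), then resolves each matching PCI function to its kernel interface through sysfs and keeps only interfaces that are up. A minimal standalone sketch of that sysfs walk; the helper name is mine, and the BDF and cvl_* names are simply what this rig reports:

#!/usr/bin/env bash
# Roughly what nvmf/common.sh does per device:
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), then keep interfaces that are up.
list_pci_net_devs() {
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || return 0                    # no netdev bound to this function
        [[ $(cat "$path/operstate") == up ]] && echo "${path##*/}"
    done
}
list_pci_net_devs 0000:0a:00.0    # prints cvl_0_0 on this machine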
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:26.390 13:17:47 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:26.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:26.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:25:26.390
00:25:26.390 --- 10.0.0.2 ping statistics ---
00:25:26.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:26.390 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:26.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:26.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms
00:25:26.390
00:25:26.390 --- 10.0.0.1 ping statistics ---
00:25:26.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:26.390 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:26.390 13:17:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:26.390 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:26.390 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=()
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=()
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:25:26.390 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:25:26.650 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:25:26.650 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:25:26.650 13:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0
00:25:26.650 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:25:26.650 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:25:26.650 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:25:26.650 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:25:26.650 13:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:25:26.650 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.834 13:17:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 13:17:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 13:17:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 13:17:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:25:35.039 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3943503
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:35.039 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3943503
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3943503 ']'
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:35.039 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.039 [2024-07-15 13:17:56.622310] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization...
00:25:35.039 [2024-07-15 13:17:56.622416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:35.039 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.297 [2024-07-15 13:17:56.688033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:35.297 [2024-07-15 13:17:56.800071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:35.297 [2024-07-15 13:17:56.800134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
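The nvmf_tcp_init sequence above is what makes single-host NVMe/TCP testing honest: the two back-to-back-cabled ports of one NIC are split between a private network namespace (target side) and the root namespace (initiator side), so traffic really leaves the host. Condensed from the trace, with the interface names and addresses this job used:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, will hold 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator to target sanity check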
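The nvme_identify step records the drive's identity over plain PCIe before the device is handed to the target; the same grep/awk pipeline reappears later against the fabric. One subtlety worth noting: awk '{print $3}' keeps only the first word after the "Model Number:" label, which is why the log stores just INTEL rather than the full model string. Reconstructed as a standalone snippet, with paths per this workspace:

identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
bdf=0000:88:00.0
nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "$nvme_serial_number $nvme_model_number"    # PHLJ916004901P0FGN INTEL on this box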
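Because the passthru configuration must be injected before subsystem initialization, nvmf_tgt is launched inside the namespace with --wait-for-rpc, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that startup dance; the retry loop is mine, and the real waitforlisten in autotest_common.sh is more careful about process death and retry limits:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
trap 'kill $nvmfpid' SIGINT SIGTERM EXIT
# Poll the RPC socket until the app responds (rough equivalent of waitforlisten).
for _ in $(seq 1 100); do
    "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done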
00:25:35.297 [2024-07-15 13:17:56.800162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:35.297 [2024-07-15 13:17:56.800173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:35.297 [2024-07-15 13:17:56.800183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:35.297 [2024-07-15 13:17:56.800238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:35.297 [2024-07-15 13:17:56.800300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:35.297 [2024-07-15 13:17:56.800365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:35.297 [2024-07-15 13:17:56.800368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:35.297 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:35.297 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0
00:25:35.297 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:25:35.297 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.297 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.297 INFO: Log level set to 20
00:25:35.297 INFO: Requests:
00:25:35.297 {
00:25:35.297 "jsonrpc": "2.0",
00:25:35.297 "method": "nvmf_set_config",
00:25:35.297 "id": 1,
00:25:35.297 "params": {
00:25:35.297 "admin_cmd_passthru": {
00:25:35.297 "identify_ctrlr": true
00:25:35.297 }
00:25:35.297 }
00:25:35.297 }
00:25:35.297
00:25:35.297 INFO: response:
00:25:35.297 {
00:25:35.297 "jsonrpc": "2.0",
00:25:35.298 "id": 1,
00:25:35.298 "result": true
00:25:35.298 }
00:25:35.298
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.298 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.298 INFO: Setting log level to 20
00:25:35.298 INFO: Setting log level to 20
00:25:35.298 INFO: Log level set to 20
00:25:35.298 INFO: Log level set to 20
00:25:35.298 INFO: Requests:
00:25:35.298 {
00:25:35.298 "jsonrpc": "2.0",
00:25:35.298 "method": "framework_start_init",
00:25:35.298 "id": 1
00:25:35.298 }
00:25:35.298
00:25:35.298 INFO: Requests:
00:25:35.298 {
00:25:35.298 "jsonrpc": "2.0",
00:25:35.298 "method": "framework_start_init",
00:25:35.298 "id": 1
00:25:35.298 }
00:25:35.298
00:25:35.298 [2024-07-15 13:17:56.941218] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:25:35.298 INFO: response:
00:25:35.298 {
00:25:35.298 "jsonrpc": "2.0",
00:25:35.298 "id": 1,
00:25:35.298 "result": true
00:25:35.298 }
00:25:35.298
00:25:35.298 INFO: response:
00:25:35.298 {
00:25:35.298 "jsonrpc": "2.0",
00:25:35.298 "id": 1,
00:25:35.298 "result": true
00:25:35.298 }
00:25:35.298
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.298 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.298 INFO: Setting log level to 40
00:25:35.298 INFO: Setting log level to 40
00:25:35.298 INFO: Setting log level to 40
00:25:35.298 [2024-07-15 13:17:56.951376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.298 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:35.298 13:17:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.298 13:17:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.576 Nvme0n1
00:25:38.576 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.576 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.577 [2024-07-15 13:17:59.846039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.577 [
00:25:38.577 {
00:25:38.577 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:38.577 "subtype": "Discovery",
00:25:38.577 "listen_addresses": [],
00:25:38.577 "allow_any_host": true,
00:25:38.577 "hosts": []
00:25:38.577 },
00:25:38.577 {
00:25:38.577 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:38.577 "subtype": "NVMe",
00:25:38.577 "listen_addresses": [
00:25:38.577 {
00:25:38.577 "trtype": "TCP",
00:25:38.577 "adrfam": "IPv4",
00:25:38.577 "traddr": "10.0.0.2",
00:25:38.577 "trsvcid": "4420"
00:25:38.577 }
00:25:38.577 ],
00:25:38.577 "allow_any_host": true,
00:25:38.577 "hosts": [],
00:25:38.577 "serial_number": "SPDK00000000000001",
00:25:38.577 "model_number": "SPDK bdev Controller",
00:25:38.577 "max_namespaces": 1,
00:25:38.577 "min_cntlid": 1,
00:25:38.577 "max_cntlid": 65519,
00:25:38.577 "namespaces": [
00:25:38.577 {
00:25:38.577 "nsid": 1,
00:25:38.577 "bdev_name": "Nvme0n1",
00:25:38.577 "name": "Nvme0n1",
00:25:38.577 "nguid": "2EE62AD21CC642A0B3D0DFAE902C99FB",
00:25:38.577 "uuid": "2ee62ad2-1cc6-42a0-b3d0-dfae902c99fb"
00:25:38.577 }
00:25:38.577 ]
00:25:38.577 }
00:25:38.577 ]
00:25:38.577 13:17:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:25:38.577 13:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:25:38.577 EAL: No free 2048 kB hugepages reported on node 1
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:25:38.835 EAL: No free 2048 kB hugepages reported on node 1
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:25:38.835 13:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:38.835 rmmod nvme_tcp
00:25:38.835 rmmod nvme_fabrics
00:25:38.835 rmmod nvme_keyring
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3943503 ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3943503
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3943503 ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3943503
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3943503
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3943503'
00:25:38.835 killing process with pid 3943503
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3943503
00:25:38.835 13:18:00 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3943503
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:40.736 13:18:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:40.736 13:18:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:40.736 13:18:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:42.637 13:18:04 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:42.637
00:25:42.637 real 0m17.949s
00:25:42.637 user 0m26.895s
00:25:42.637 sys 0m2.238s
00:25:42.637 13:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:42.637 13:18:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:42.637 ************************************
00:25:42.637 END TEST nvmf_identify_passthru
00:25:42.637 ************************************
00:25:42.637 13:18:04 -- common/autotest_common.sh@1142 -- # return 0
00:25:42.637 13:18:04 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:25:42.637 13:18:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:25:42.637 13:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:42.637 13:18:04 -- common/autotest_common.sh@10 -- # set +x
00:25:42.637 ************************************
00:25:42.637 START TEST nvmf_dif
00:25:42.637 ************************************
00:25:42.637 13:18:04 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:25:42.637 * Looking for test storage...
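Stripped of the xtrace noise, the identify_passthru target setup that just completed reduces to a short rpc.py recipe. Every value below is taken verbatim from this run; the nvmf_set_config call has to land while the app is still paused by --wait-for-rpc, and framework_start_init is the step that printed "Custom identify ctrlr handler enabled":

rpc="$spdk/scripts/rpc.py"     # $spdk points at the checkout, as in the earlier sketch
$rpc nvmf_set_config --passthru-identify-ctrlr        # pre-init: forward Identify to the backing controller
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420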
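The pass criterion itself is simple: Identify data fetched over NVMe/TCP must equal what the drive reported over PCIe, since the target forwards the admin command instead of synthesizing controller data. Condensed from the trace, reusing $identify and the nvme_* variables from the earlier snippet; note the leading space inside the -r transport string, exactly as the test passes it:

fabric=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
nvmf_serial_number=$("$identify" -r "$fabric" | grep 'Serial Number:' | awk '{print $3}')
nvmf_model_number=$("$identify" -r "$fabric" | grep 'Model Number:' | awk '{print $3}')
[ "$nvmf_serial_number" != "$nvme_serial_number" ] && exit 1   # would have failed this test
[ "$nvmf_model_number" != "$nvme_model_number" ] && exit 1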
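Teardown reuses the same killprocess helper traced after the spdkcli test at the top of this section: probe the pid with kill -0, confirm via ps that the process is still an SPDK reactor, then kill and reap it; the second invocation from the cleanup path tolerates "No such process". A condensed rendering of the traced logic (the real helper in autotest_common.sh also special-cases sudo-wrapped targets, which this sketch omits):

killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    # The trace shows: ps --no-headers -o comm= $pid, then the sudo check.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}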
00:25:42.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:42.637 13:18:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.637 13:18:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.638 13:18:04 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.638 13:18:04 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.638 13:18:04 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.638 13:18:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.638 13:18:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.638 13:18:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.638 13:18:04 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:42.638 13:18:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.638 13:18:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:42.638 13:18:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:42.638 13:18:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:42.638 13:18:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:42.638 13:18:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.638 13:18:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.638 13:18:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.638 13:18:04 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.638 13:18:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:44.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:44.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
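For the nvmf_dif test now starting, the values exported at target/dif.sh@15 a few lines up (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1) describe the backing devices: 64 MB null bdevs with 512-byte blocks plus 16 bytes of metadata carrying DIF type 1, and, as the trace shows further below, the transport options later gain --dif-insert-or-strip so the target inserts and strips protection information on behalf of the TCP initiator. A sketch of the matching RPCs, assuming rpc.py's bdev_null_create --md-size/--dif-type options and a hypothetical bdev name:

# NULL_SIZE is in MB; NULL_META is the per-block metadata size in bytes.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip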
00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:44.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:44.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.540 13:18:06 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.540 13:18:06 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:44.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:44.541 00:25:44.541 --- 10.0.0.2 ping statistics --- 00:25:44.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.541 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:44.541 13:18:06 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:44.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:44.541 00:25:44.541 --- 10.0.0.1 ping statistics --- 00:25:44.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.541 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:44.541 13:18:06 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.541 13:18:06 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:44.541 13:18:06 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:44.541 13:18:06 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:45.913 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:45.913 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:45.913 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:45.913 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:45.913 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:45.913 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:45.913 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:45.913 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:45.913 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:45.913 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:45.913 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:45.913 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:45.913 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:45.913 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:45.913 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:45.913 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:45.913 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.913 13:18:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:45.913 13:18:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3946872 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:45.913 13:18:07 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3946872 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3946872 ']' 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.913 13:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:45.913 [2024-07-15 13:18:07.463110] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:25:45.913 [2024-07-15 13:18:07.463221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.913 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.913 [2024-07-15 13:18:07.529872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.170 [2024-07-15 13:18:07.644329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.170 [2024-07-15 13:18:07.644396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.170 [2024-07-15 13:18:07.644410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.170 [2024-07-15 13:18:07.644421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.170 [2024-07-15 13:18:07.644431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
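For reference, the nvmf_tcp_init sequence traced above reduces to the sketch below. The interface names (cvl_0_0, cvl_0_1), addresses and namespace name are the ones generated in this run; SPDK_BIN is a stand-in for the build directory and not a variable the script itself uses:

  ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF   # run the target inside the namespace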
00:25:46.170 [2024-07-15 13:18:07.644458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:46.170 13:18:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 13:18:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.170 13:18:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:46.170 13:18:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 [2024-07-15 13:18:07.780232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.170 13:18:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 ************************************ 00:25:46.170 START TEST fio_dif_1_default 00:25:46.170 ************************************ 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 bdev_null0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 [2024-07-15 13:18:07.840521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:46.170 { 00:25:46.170 "params": { 00:25:46.170 "name": "Nvme$subsystem", 00:25:46.170 "trtype": "$TEST_TRANSPORT", 00:25:46.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.170 "adrfam": "ipv4", 00:25:46.170 "trsvcid": "$NVMF_PORT", 00:25:46.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.170 "hdgst": ${hdgst:-false}, 00:25:46.170 "ddgst": ${ddgst:-false} 00:25:46.170 }, 00:25:46.170 "method": "bdev_nvme_attach_controller" 00:25:46.170 } 00:25:46.170 EOF 00:25:46.170 )") 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:46.170 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:46.171 13:18:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:46.171 "params": { 00:25:46.171 "name": "Nvme0", 00:25:46.171 "trtype": "tcp", 00:25:46.171 "traddr": "10.0.0.2", 00:25:46.171 "adrfam": "ipv4", 00:25:46.171 "trsvcid": "4420", 00:25:46.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.171 "hdgst": false, 00:25:46.171 "ddgst": false 00:25:46.171 }, 00:25:46.171 "method": "bdev_nvme_attach_controller" 00:25:46.171 }' 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:46.428 13:18:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.428 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.428 fio-3.35 00:25:46.428 Starting 1 thread 00:25:46.685 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.884 00:25:58.884 filename0: (groupid=0, jobs=1): err= 0: pid=3947387: Mon Jul 15 13:18:18 2024 00:25:58.884 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:25:58.884 slat (nsec): min=4889, max=39216, avg=8774.84, stdev=3489.79 00:25:58.884 clat (usec): min=40894, max=46800, avg=41013.47, stdev=390.29 00:25:58.884 lat (usec): min=40901, max=46820, avg=41022.25, stdev=390.48 00:25:58.884 clat percentiles (usec): 00:25:58.884 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:58.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:58.884 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:58.884 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:25:58.884 | 99.99th=[46924] 00:25:58.884 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:25:58.884 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:25:58.884 lat (msec) : 50=100.00% 00:25:58.884 cpu : usr=89.73%, sys=10.01%, ctx=13, majf=0, minf=177 00:25:58.884 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.884 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.884 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:58.884 00:25:58.884 Run status group 0 (all jobs): 00:25:58.884 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:25:58.884 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 00:25:58.885 real 0m11.151s 00:25:58.885 user 0m10.092s 00:25:58.885 sys 0m1.302s 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:58.885 13:18:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 ************************************ 00:25:58.885 END TEST fio_dif_1_default 00:25:58.885 ************************************ 00:25:58.885 13:18:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:58.885 13:18:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:58.885 13:18:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:58.885 13:18:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.885 13:18:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 ************************************ 00:25:58.885 START TEST fio_dif_1_multi_subsystems 00:25:58.885 ************************************ 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
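Stripped of tracing, the fio_dif_1_default test that completed above is a five-step flow. The commands are lifted directly from the trace (rpc_cmd is the test framework's RPC helper talking to /var/tmp/spdk.sock; the LD_PRELOAD path is abbreviated):

  # 1. null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # 2-4. subsystem, namespace, TCP listener
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # 5. fio drives the namespace through the bdev plugin; /dev/fd/62 carries the
  #    bdev_nvme_attach_controller JSON printed in the trace, /dev/fd/61 the job file
  LD_PRELOAD=.../build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61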
00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 bdev_null0 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 [2024-07-15 13:18:19.033202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 bdev_null1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:58.885 { 00:25:58.885 "params": { 00:25:58.885 "name": "Nvme$subsystem", 00:25:58.885 "trtype": "$TEST_TRANSPORT", 00:25:58.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.885 "adrfam": "ipv4", 00:25:58.885 "trsvcid": "$NVMF_PORT", 00:25:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.885 "hdgst": ${hdgst:-false}, 00:25:58.885 "ddgst": ${ddgst:-false} 00:25:58.885 }, 00:25:58.885 "method": "bdev_nvme_attach_controller" 00:25:58.885 } 00:25:58.885 EOF 00:25:58.885 )") 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:58.885 { 00:25:58.885 "params": { 00:25:58.885 "name": "Nvme$subsystem", 00:25:58.885 "trtype": "$TEST_TRANSPORT", 00:25:58.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.885 "adrfam": "ipv4", 00:25:58.885 "trsvcid": "$NVMF_PORT", 00:25:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.885 "hdgst": ${hdgst:-false}, 00:25:58.885 "ddgst": ${ddgst:-false} 00:25:58.885 }, 00:25:58.885 "method": "bdev_nvme_attach_controller" 00:25:58.885 } 00:25:58.885 EOF 00:25:58.885 )") 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:58.885 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:58.885 "params": { 00:25:58.885 "name": "Nvme0", 00:25:58.885 "trtype": "tcp", 00:25:58.885 "traddr": "10.0.0.2", 00:25:58.885 "adrfam": "ipv4", 00:25:58.885 "trsvcid": "4420", 00:25:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.885 "hdgst": false, 00:25:58.885 "ddgst": false 00:25:58.885 }, 00:25:58.885 "method": "bdev_nvme_attach_controller" 00:25:58.886 },{ 00:25:58.886 "params": { 00:25:58.886 "name": "Nvme1", 00:25:58.886 "trtype": "tcp", 00:25:58.886 "traddr": "10.0.0.2", 00:25:58.886 "adrfam": "ipv4", 00:25:58.886 "trsvcid": "4420", 00:25:58.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.886 "hdgst": false, 00:25:58.886 "ddgst": false 00:25:58.886 }, 00:25:58.886 "method": "bdev_nvme_attach_controller" 00:25:58.886 }' 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:58.886 13:18:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.886 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.886 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.886 fio-3.35 00:25:58.886 Starting 2 threads 00:25:58.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.855 00:26:08.855 filename0: (groupid=0, jobs=1): err= 0: pid=3948898: Mon Jul 15 13:18:30 2024 00:26:08.855 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10027msec) 00:26:08.855 slat (nsec): min=6806, max=56987, avg=9532.06, stdev=3933.67 00:26:08.855 clat (usec): min=40875, max=43124, avg=41061.10, stdev=311.08 00:26:08.855 lat (usec): min=40883, max=43157, avg=41070.63, stdev=312.12 00:26:08.855 clat percentiles (usec): 00:26:08.855 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:08.855 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:08.855 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:26:08.855 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:26:08.855 | 99.99th=[43254] 
00:26:08.855 bw ( KiB/s): min= 384, max= 416, per=49.83%, avg=388.80, stdev=11.72, samples=20 00:26:08.855 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:08.855 lat (msec) : 50=100.00% 00:26:08.855 cpu : usr=94.06%, sys=5.66%, ctx=18, majf=0, minf=104 00:26:08.855 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.855 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.855 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.855 filename1: (groupid=0, jobs=1): err= 0: pid=3948899: Mon Jul 15 13:18:30 2024 00:26:08.855 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:26:08.855 slat (nsec): min=6406, max=65306, avg=9676.33, stdev=4434.27 00:26:08.855 clat (usec): min=40870, max=43096, avg=41047.99, stdev=275.99 00:26:08.855 lat (usec): min=40880, max=43113, avg=41057.67, stdev=276.78 00:26:08.855 clat percentiles (usec): 00:26:08.855 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:08.855 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:08.855 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:26:08.855 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:26:08.855 | 99.99th=[43254] 00:26:08.855 bw ( KiB/s): min= 384, max= 416, per=49.83%, avg=388.80, stdev=11.72, samples=20 00:26:08.855 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:08.855 lat (msec) : 50=100.00% 00:26:08.855 cpu : usr=94.43%, sys=5.28%, ctx=15, majf=0, minf=170 00:26:08.855 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.855 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.855 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.855 00:26:08.855 Run status group 0 (all jobs): 00:26:08.855 READ: bw=779KiB/s (797kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10024-10027msec 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.855 13:18:30 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.855 00:26:08.855 real 0m11.361s 00:26:08.855 user 0m20.109s 00:26:08.855 sys 0m1.385s 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:08.855 13:18:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 ************************************ 00:26:08.855 END TEST fio_dif_1_multi_subsystems 00:26:08.855 ************************************ 00:26:08.855 13:18:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:08.855 13:18:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:08.855 13:18:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:08.855 13:18:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.855 13:18:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.855 ************************************ 00:26:08.855 START TEST fio_dif_rand_params 00:26:08.855 ************************************ 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
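A back-of-the-envelope check on the two-thread results above (this arithmetic is not part of the test run): each filename reports roughly 41 ms average completion latency at iodepth 4, which by Little's law pins the per-thread rate at about 97 IOPS, i.e. 389-390 KiB/s at 4 KiB blocks, matching what fio reports:

  echo 'scale=1; 4 / 0.04105' | bc   # iodepth / avg clat (s)  -> ~97.4 IOPS
  echo '97.4 * 4' | bc               # IOPS * 4 KiB            -> ~389.6 KiB/s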
00:26:08.855 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.856 bdev_null0 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:08.856 [2024-07-15 13:18:30.450643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 
00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:08.856 { 00:26:08.856 "params": { 00:26:08.856 "name": "Nvme$subsystem", 00:26:08.856 "trtype": "$TEST_TRANSPORT", 00:26:08.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.856 "adrfam": "ipv4", 00:26:08.856 "trsvcid": "$NVMF_PORT", 00:26:08.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.856 "hdgst": ${hdgst:-false}, 00:26:08.856 "ddgst": ${ddgst:-false} 00:26:08.856 }, 00:26:08.856 "method": "bdev_nvme_attach_controller" 00:26:08.856 } 00:26:08.856 EOF 00:26:08.856 )") 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
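gen_fio_conf writes the job file to /dev/fd/61, so its body never appears in the trace. What follows is only a plausible reconstruction from the banner fio prints below and the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 parameters set above; the [global] options other than runtime, and the filename= value, are guesses:

  [global]
  thread=1
  ioengine=spdk_bdev
  time_based=1
  runtime=5
  [filename0]
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  filename=Nvme0n1   ; hypothetical: whichever bdev name the attach JSON produces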
00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:08.856 "params": { 00:26:08.856 "name": "Nvme0", 00:26:08.856 "trtype": "tcp", 00:26:08.856 "traddr": "10.0.0.2", 00:26:08.856 "adrfam": "ipv4", 00:26:08.856 "trsvcid": "4420", 00:26:08.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:08.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:08.856 "hdgst": false, 00:26:08.856 "ddgst": false 00:26:08.856 }, 00:26:08.856 "method": "bdev_nvme_attach_controller" 00:26:08.856 }' 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:08.856 13:18:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.113 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:09.113 ... 
00:26:09.113 fio-3.35 00:26:09.113 Starting 3 threads 00:26:09.113 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.671 00:26:15.671 filename0: (groupid=0, jobs=1): err= 0: pid=3950368: Mon Jul 15 13:18:36 2024 00:26:15.671 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(138MiB/5003msec) 00:26:15.671 slat (nsec): min=4968, max=42313, avg=16835.86, stdev=5149.20 00:26:15.671 clat (usec): min=3980, max=90252, avg=13547.01, stdev=11012.62 00:26:15.671 lat (usec): min=4000, max=90275, avg=13563.84, stdev=11012.86 00:26:15.671 clat percentiles (usec): 00:26:15.671 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 8586], 00:26:15.671 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11600], 00:26:15.671 | 70.00th=[12518], 80.00th=[13435], 90.00th=[15533], 95.00th=[50070], 00:26:15.671 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[90702], 00:26:15.671 | 99.99th=[90702] 00:26:15.671 bw ( KiB/s): min=19456, max=36096, per=35.12%, avg=28242.40, stdev=5593.09, samples=10 00:26:15.671 iops : min= 152, max= 282, avg=220.60, stdev=43.70, samples=10 00:26:15.671 lat (msec) : 4=0.09%, 10=42.59%, 20=50.09%, 50=2.08%, 100=5.15% 00:26:15.671 cpu : usr=92.96%, sys=5.94%, ctx=309, majf=0, minf=83 00:26:15.671 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.671 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.671 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.671 filename0: (groupid=0, jobs=1): err= 0: pid=3950369: Mon Jul 15 13:18:36 2024 00:26:15.671 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5042msec) 00:26:15.671 slat (usec): min=4, max=108, avg=15.44, stdev= 5.68 00:26:15.671 clat (usec): min=4694, max=55373, avg=15200.22, stdev=13198.64 00:26:15.671 lat (usec): min=4705, max=55406, avg=15215.67, stdev=13199.13 00:26:15.671 clat percentiles (usec): 00:26:15.671 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 7046], 20.00th=[ 8455], 00:26:15.671 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[11207], 60.00th=[11994], 00:26:15.671 | 70.00th=[12911], 80.00th=[14222], 90.00th=[49021], 95.00th=[51119], 00:26:15.671 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:26:15.671 | 99.99th=[55313] 00:26:15.671 bw ( KiB/s): min=17664, max=31488, per=31.48%, avg=25318.40, stdev=4522.59, samples=10 00:26:15.671 iops : min= 138, max= 246, avg=197.80, stdev=35.33, samples=10 00:26:15.671 lat (msec) : 10=39.05%, 20=49.55%, 50=3.63%, 100=7.77% 00:26:15.671 cpu : usr=94.25%, sys=5.34%, ctx=7, majf=0, minf=176 00:26:15.671 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.671 issued rwts: total=991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.671 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.671 filename0: (groupid=0, jobs=1): err= 0: pid=3950370: Mon Jul 15 13:18:36 2024 00:26:15.671 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5017msec) 00:26:15.671 slat (nsec): min=4820, max=37575, avg=15157.33, stdev=3852.34 00:26:15.672 clat (usec): min=5167, max=57618, avg=14032.98, stdev=12014.30 00:26:15.672 lat (usec): min=5180, max=57632, avg=14048.13, stdev=12014.48 00:26:15.672 clat percentiles (usec): 
00:26:15.672 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 8160], 00:26:15.672 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10945], 60.00th=[11863], 00:26:15.672 | 70.00th=[12780], 80.00th=[13829], 90.00th=[16450], 95.00th=[51643], 00:26:15.672 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[57410], 00:26:15.672 | 99.99th=[57410] 00:26:15.672 bw ( KiB/s): min=19968, max=32256, per=34.00%, avg=27345.60, stdev=3911.24, samples=10 00:26:15.672 iops : min= 156, max= 252, avg=213.60, stdev=30.59, samples=10 00:26:15.672 lat (msec) : 10=44.07%, 20=47.25%, 50=1.59%, 100=7.10% 00:26:15.672 cpu : usr=93.80%, sys=5.70%, ctx=26, majf=0, minf=63 00:26:15.672 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.672 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.672 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.672 00:26:15.672 Run status group 0 (all jobs): 00:26:15.672 READ: bw=78.5MiB/s (82.4MB/s), 24.6MiB/s-27.6MiB/s (25.8MB/s-29.0MB/s), io=396MiB (415MB), run=5003-5042msec 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
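The second fio_dif_rand_params pass switches to NULL_DIF=2 with three subsystems (bs=4k, numjobs=8, iodepth=16, files=2). The create_subsystems 0 1 2 call whose trace starts just above unrolls to the loop below; the commands are as traced, only the loop form is reconstructed:

  for sub in 0 1 2; do
      rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" --serial-number "53313233-$sub" --allow-any-host
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
  done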
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 bdev_null0
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 [2024-07-15 13:18:36.613570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 bdev_null1
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 bdev_null2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:15.672 {
00:26:15.672 "params": {
00:26:15.672 "name": "Nvme$subsystem",
00:26:15.672 "trtype": "$TEST_TRANSPORT",
00:26:15.672 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:15.672 "adrfam": "ipv4",
00:26:15.672 "trsvcid": "$NVMF_PORT",
00:26:15.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:15.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:15.672 "hdgst": ${hdgst:-false},
00:26:15.672 "ddgst": ${ddgst:-false}
00:26:15.672 },
00:26:15.672 "method": "bdev_nvme_attach_controller"
00:26:15.672 }
00:26:15.672 EOF
00:26:15.672 )")
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:26:15.672 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:15.673 {
00:26:15.673 "params": {
00:26:15.673 "name": "Nvme$subsystem",
00:26:15.673 "trtype": "$TEST_TRANSPORT",
00:26:15.673 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:15.673 "adrfam": "ipv4",
00:26:15.673 "trsvcid": "$NVMF_PORT",
00:26:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:15.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:15.673 "hdgst": ${hdgst:-false},
00:26:15.673 "ddgst": ${ddgst:-false}
00:26:15.673 },
00:26:15.673 "method": "bdev_nvme_attach_controller"
00:26:15.673 }
00:26:15.673 EOF
00:26:15.673 )")
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:15.673 {
00:26:15.673 "params": {
00:26:15.673 "name": "Nvme$subsystem",
00:26:15.673 "trtype": "$TEST_TRANSPORT",
00:26:15.673 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:15.673 "adrfam": "ipv4",
00:26:15.673 "trsvcid": "$NVMF_PORT",
00:26:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:15.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:15.673 "hdgst": ${hdgst:-false},
00:26:15.673 "ddgst": ${ddgst:-false}
00:26:15.673 },
00:26:15.673 "method": "bdev_nvme_attach_controller"
00:26:15.673 }
00:26:15.673 EOF
00:26:15.673 )")
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:15.673 "params": {
00:26:15.673 "name": "Nvme0",
00:26:15.673 "trtype": "tcp",
00:26:15.673 "traddr": "10.0.0.2",
00:26:15.673 "adrfam": "ipv4",
00:26:15.673 "trsvcid": "4420",
00:26:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:15.673 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:15.673 "hdgst": false,
00:26:15.673 "ddgst": false
00:26:15.673 },
00:26:15.673 "method": "bdev_nvme_attach_controller"
00:26:15.673 },{
00:26:15.673 "params": {
00:26:15.673 "name": "Nvme1",
00:26:15.673 "trtype": "tcp",
00:26:15.673 "traddr": "10.0.0.2",
00:26:15.673 "adrfam": "ipv4",
00:26:15.673 "trsvcid": "4420",
00:26:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:15.673 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:15.673 "hdgst": false,
00:26:15.673 "ddgst": false
00:26:15.673 },
00:26:15.673 "method": "bdev_nvme_attach_controller"
00:26:15.673 },{
00:26:15.673 "params": {
00:26:15.673 "name": "Nvme2",
00:26:15.673 "trtype": "tcp",
00:26:15.673 "traddr": "10.0.0.2",
00:26:15.673 "adrfam": "ipv4",
00:26:15.673 "trsvcid": "4420",
00:26:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:15.673 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:15.673 "hdgst": false,
00:26:15.673 "ddgst": false
00:26:15.673 },
00:26:15.673 "method": "bdev_nvme_attach_controller"
00:26:15.673 }'
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:26:15.673 13:18:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:15.673 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:15.673 ...
00:26:15.673 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:15.673 ...
00:26:15.673 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:15.673 ...
00:26:15.673 fio-3.35
00:26:15.673 Starting 24 threads
00:26:15.673 EAL: No free 2048 kB hugepages reported on node 1
00:26:27.867 
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951175: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=153, BW=616KiB/s (631kB/s)(6208KiB/10080msec)
00:26:27.867 slat (usec): min=8, max=112, avg=33.44, stdev=21.12
00:26:27.867 clat (msec): min=20, max=350, avg=103.63, stdev=98.40
00:26:27.867 lat (msec): min=20, max=350, avg=103.67, stdev=98.39
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.867 | 70.00th=[ 197], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 288],
00:26:27.867 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 351],
00:26:27.867 | 99.99th=[ 351]
00:26:27.867 bw ( KiB/s): min= 240, max= 1920, per=4.33%, avg=614.55, stdev=654.28, samples=20
00:26:27.867 iops : min= 60, max= 480, avg=153.60, stdev=163.51, samples=20
00:26:27.867 lat (msec) : 50=63.92%, 100=1.03%, 250=28.87%, 500=6.19%
00:26:27.867 cpu : usr=96.65%, sys=2.03%, ctx=73, majf=0, minf=59
00:26:27.867 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951176: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=158, BW=633KiB/s (648kB/s)(6400KiB/10115msec)
00:26:27.867 slat (nsec): min=8051, max=73771, avg=28284.86, stdev=18792.98
00:26:27.867 clat (msec): min=13, max=281, avg=100.54, stdev=90.90
00:26:27.867 lat (msec): min=14, max=281, avg=100.57, stdev=90.89
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37],
00:26:27.867 | 70.00th=[ 194], 80.00th=[ 222], 90.00th=[ 239], 95.00th=[ 245],
00:26:27.867 | 99.00th=[ 259], 99.50th=[ 259], 99.90th=[ 284], 99.95th=[ 284],
00:26:27.867 | 99.99th=[ 284]
00:26:27.867 bw ( KiB/s): min= 256, max= 1920, per=4.46%, avg=633.60, stdev=654.19, samples=20
00:26:27.867 iops : min= 64, max= 480, avg=158.40, stdev=163.55, samples=20
00:26:27.867 lat (msec) : 20=1.00%, 50=63.00%, 250=34.88%, 500=1.12%
00:26:27.867 cpu : usr=98.31%, sys=1.27%, ctx=20, majf=0, minf=71
00:26:27.867 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951177: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=136, BW=547KiB/s (560kB/s)(5496KiB/10042msec)
00:26:27.867 slat (usec): min=8, max=128, avg=41.50, stdev=23.97
00:26:27.867 clat (msec): min=32, max=487, avg=116.54, stdev=134.23
00:26:27.867 lat (msec): min=32, max=487, avg=116.58, stdev=134.22
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:26:27.867 | 70.00th=[ 73], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 384],
00:26:27.867 | 99.00th=[ 430], 99.50th=[ 456], 99.90th=[ 489], 99.95th=[ 489],
00:26:27.867 | 99.99th=[ 489]
00:26:27.867 bw ( KiB/s): min= 128, max= 1920, per=3.83%, avg=543.20, stdev=670.99, samples=20
00:26:27.867 iops : min= 32, max= 480, avg=135.80, stdev=167.75, samples=20
00:26:27.867 lat (msec) : 50=69.87%, 100=1.16%, 250=2.33%, 500=26.64%
00:26:27.867 cpu : usr=98.10%, sys=1.50%, ctx=27, majf=0, minf=46
00:26:27.867 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951178: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=145, BW=582KiB/s (596kB/s)(5848KiB/10045msec)
00:26:27.867 slat (usec): min=7, max=136, avg=41.81, stdev=25.36
00:26:27.867 clat (msec): min=21, max=426, avg=109.50, stdev=112.24
00:26:27.867 lat (msec): min=21, max=426, avg=109.55, stdev=112.22
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.867 | 70.00th=[ 192], 80.00th=[ 239], 90.00th=[ 292], 95.00th=[ 313],
00:26:27.867 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426],
00:26:27.867 | 99.99th=[ 426]
00:26:27.867 bw ( KiB/s): min= 128, max= 1920, per=4.07%, avg=578.40, stdev=659.05, samples=20
00:26:27.867 iops : min= 32, max= 480, avg=144.60, stdev=164.76, samples=20
00:26:27.867 lat (msec) : 50=65.39%, 100=1.78%, 250=17.92%, 500=14.91%
00:26:27.867 cpu : usr=97.72%, sys=1.71%, ctx=39, majf=0, minf=48
00:26:27.867 IO depths : 1=4.4%, 2=10.3%, 4=23.9%, 8=53.3%, 16=8.1%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951179: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=137, BW=548KiB/s (562kB/s)(5504KiB/10037msec)
00:26:27.867 slat (usec): min=8, max=146, avg=47.09, stdev=25.51
00:26:27.867 clat (msec): min=32, max=490, avg=116.26, stdev=131.87
00:26:27.867 lat (msec): min=32, max=490, avg=116.30, stdev=131.85
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:26:27.867 | 70.00th=[ 73], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 372],
00:26:27.867 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 489], 99.95th=[ 489],
00:26:27.867 | 99.99th=[ 489]
00:26:27.867 bw ( KiB/s): min= 128, max= 1920, per=3.83%, avg=544.00, stdev=671.60, samples=20
00:26:27.867 iops : min= 32, max= 480, avg=136.00, stdev=167.90, samples=20
00:26:27.867 lat (msec) : 50=69.77%, 100=1.16%, 250=3.20%, 500=25.87%
00:26:27.867 cpu : usr=98.28%, sys=1.33%, ctx=12, majf=0, minf=44
00:26:27.867 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951180: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=146, BW=587KiB/s (601kB/s)(5904KiB/10060msec)
00:26:27.867 slat (usec): min=7, max=130, avg=43.90, stdev=20.97
00:26:27.867 clat (msec): min=20, max=455, avg=108.67, stdev=108.30
00:26:27.867 lat (msec): min=20, max=455, avg=108.71, stdev=108.29
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37],
00:26:27.867 | 70.00th=[ 192], 80.00th=[ 239], 90.00th=[ 271], 95.00th=[ 305],
00:26:27.867 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 456], 99.95th=[ 456],
00:26:27.867 | 99.99th=[ 456]
00:26:27.867 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=584.00, stdev=640.22, samples=20
00:26:27.867 iops : min= 32, max= 480, avg=146.00, stdev=160.06, samples=20
00:26:27.867 lat (msec) : 50=65.18%, 100=1.22%, 250=21.95%, 500=11.65%
00:26:27.867 cpu : usr=98.25%, sys=1.33%, ctx=18, majf=0, minf=50
00:26:27.867 IO depths : 1=4.4%, 2=10.4%, 4=23.9%, 8=53.3%, 16=8.1%, 32=0.0%, >=64=0.0%
00:26:27.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.867 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.867 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.867 filename0: (groupid=0, jobs=1): err= 0: pid=3951181: Mon Jul 15 13:18:47 2024
00:26:27.867 read: IOPS=148, BW=594KiB/s (608kB/s)(5976KiB/10059msec)
00:26:27.867 slat (nsec): min=8490, max=94723, avg=45130.38, stdev=16706.63
00:26:27.867 clat (msec): min=21, max=381, avg=107.35, stdev=106.01
00:26:27.867 lat (msec): min=21, max=381, avg=107.40, stdev=106.00
00:26:27.867 clat percentiles (msec):
00:26:27.867 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.867 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.867 | 70.00th=[ 222], 80.00th=[ 239], 90.00th=[ 266], 95.00th=[ 305],
00:26:27.867 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380],
00:26:27.867 | 99.99th=[ 380]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=591.20, stdev=650.84, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=147.80, stdev=162.71, samples=20
00:26:27.868 lat (msec) : 50=65.19%, 100=1.61%, 250=21.55%, 500=11.65%
00:26:27.868 cpu : usr=98.36%, sys=1.24%, ctx=20, majf=0, minf=51
00:26:27.868 IO depths : 1=4.6%, 2=10.3%, 4=23.4%, 8=53.8%, 16=8.0%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename0: (groupid=0, jobs=1): err= 0: pid=3951182: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=147, BW=592KiB/s (606kB/s)(5952KiB/10058msec)
00:26:27.868 slat (usec): min=8, max=115, avg=49.48, stdev=22.58
00:26:27.868 clat (msec): min=30, max=420, avg=107.70, stdev=106.22
00:26:27.868 lat (msec): min=30, max=420, avg=107.75, stdev=106.20
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.868 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.868 | 70.00th=[ 222], 80.00th=[ 239], 90.00th=[ 266], 95.00th=[ 300],
00:26:27.868 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 422], 99.95th=[ 422],
00:26:27.868 | 99.99th=[ 422]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.14%, avg=588.80, stdev=646.59, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=147.20, stdev=161.65, samples=20
00:26:27.868 lat (msec) : 50=65.59%, 100=1.08%, 250=21.64%, 500=11.69%
00:26:27.868 cpu : usr=97.89%, sys=1.70%, ctx=14, majf=0, minf=58
00:26:27.868 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951183: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=158, BW=635KiB/s (650kB/s)(6400KiB/10080msec)
00:26:27.868 slat (nsec): min=7947, max=65190, avg=14149.64, stdev=9844.28
00:26:27.868 clat (msec): min=8, max=259, avg=100.61, stdev=90.77
00:26:27.868 lat (msec): min=8, max=259, avg=100.63, stdev=90.76
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 38],
00:26:27.868 | 70.00th=[ 194], 80.00th=[ 226], 90.00th=[ 239], 95.00th=[ 245],
00:26:27.868 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 259], 99.95th=[ 259],
00:26:27.868 | 99.99th=[ 259]
00:26:27.868 bw ( KiB/s): min= 256, max= 1920, per=4.46%, avg=633.60, stdev=654.62, samples=20
00:26:27.868 iops : min= 64, max= 480, avg=158.40, stdev=163.66, samples=20
00:26:27.868 lat (msec) : 10=0.44%, 20=0.69%, 50=62.88%, 250=35.00%, 500=1.00%
00:26:27.868 cpu : usr=97.36%, sys=2.22%, ctx=26, majf=0, minf=45
00:26:27.868 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951184: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=149, BW=598KiB/s (612kB/s)(6008KiB/10053msec)
00:26:27.868 slat (usec): min=8, max=135, avg=33.22, stdev=19.32
00:26:27.868 clat (msec): min=17, max=450, avg=106.86, stdev=107.66
00:26:27.868 lat (msec): min=17, max=450, avg=106.89, stdev=107.65
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 38],
00:26:27.868 | 70.00th=[ 178], 80.00th=[ 234], 90.00th=[ 266], 95.00th=[ 326],
00:26:27.868 | 99.00th=[ 363], 99.50th=[ 384], 99.90th=[ 451], 99.95th=[ 451],
00:26:27.868 | 99.99th=[ 451]
00:26:27.868 bw ( KiB/s): min= 128, max= 2016, per=4.18%, avg=594.40, stdev=668.43, samples=20
00:26:27.868 iops : min= 32, max= 504, avg=148.60, stdev=167.11, samples=20
00:26:27.868 lat (msec) : 20=0.80%, 50=64.31%, 100=1.07%, 250=23.30%, 500=10.52%
00:26:27.868 cpu : usr=97.31%, sys=1.79%, ctx=107, majf=0, minf=49
00:26:27.868 IO depths : 1=2.9%, 2=7.3%, 4=18.8%, 8=60.6%, 16=10.4%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=92.6%, 8=2.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951185: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=147, BW=591KiB/s (605kB/s)(5936KiB/10051msec)
00:26:27.868 slat (usec): min=8, max=146, avg=40.46, stdev=25.17
00:26:27.868 clat (msec): min=30, max=475, avg=107.98, stdev=106.91
00:26:27.868 lat (msec): min=30, max=475, avg=108.02, stdev=106.90
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.868 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.868 | 70.00th=[ 201], 80.00th=[ 234], 90.00th=[ 259], 95.00th=[ 300],
00:26:27.868 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 477], 99.95th=[ 477],
00:26:27.868 | 99.99th=[ 477]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.14%, avg=587.20, stdev=647.00, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=146.80, stdev=161.75, samples=20
00:26:27.868 lat (msec) : 50=64.69%, 100=1.08%, 250=23.85%, 500=10.38%
00:26:27.868 cpu : usr=97.51%, sys=1.72%, ctx=36, majf=0, minf=49
00:26:27.868 IO depths : 1=4.8%, 2=10.1%, 4=22.2%, 8=55.2%, 16=7.7%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1484,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951186: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=151, BW=607KiB/s (621kB/s)(6104KiB/10064msec)
00:26:27.868 slat (usec): min=8, max=110, avg=37.82, stdev=18.48
00:26:27.868 clat (msec): min=20, max=369, avg=105.23, stdev=103.56
00:26:27.868 lat (msec): min=21, max=369, avg=105.26, stdev=103.55
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 36],
00:26:27.868 | 70.00th=[ 222], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 300],
00:26:27.868 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 372], 99.95th=[ 372],
00:26:27.868 | 99.99th=[ 372]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=604.00, stdev=665.53, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=151.00, stdev=166.38, samples=20
00:26:27.868 lat (msec) : 50=64.88%, 100=1.57%, 250=24.25%, 500=9.31%
00:26:27.868 cpu : usr=98.23%, sys=1.23%, ctx=29, majf=0, minf=54
00:26:27.868 IO depths : 1=2.5%, 2=8.1%, 4=23.1%, 8=56.2%, 16=10.1%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951187: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=150, BW=603KiB/s (618kB/s)(6068KiB/10058msec)
00:26:27.868 slat (usec): min=8, max=112, avg=33.55, stdev=21.78
00:26:27.868 clat (msec): min=21, max=380, avg=105.81, stdev=103.73
00:26:27.868 lat (msec): min=21, max=380, avg=105.84, stdev=103.73
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.868 | 70.00th=[ 205], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 300],
00:26:27.868 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 380], 99.95th=[ 380],
00:26:27.868 | 99.99th=[ 380]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.23%, avg=600.40, stdev=659.59, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=150.10, stdev=164.90, samples=20
00:26:27.868 lat (msec) : 50=65.66%, 100=0.59%, 250=24.39%, 500=9.36%
00:26:27.868 cpu : usr=96.86%, sys=2.11%, ctx=236, majf=0, minf=57
00:26:27.868 IO depths : 1=5.2%, 2=11.1%, 4=23.7%, 8=52.7%, 16=7.4%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1517,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951188: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=154, BW=619KiB/s (633kB/s)(6224KiB/10063msec)
00:26:27.868 slat (usec): min=8, max=137, avg=34.40, stdev=24.30
00:26:27.868 clat (msec): min=29, max=366, avg=102.87, stdev=94.92
00:26:27.868 lat (msec): min=29, max=366, avg=102.91, stdev=94.91
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37],
00:26:27.868 | 70.00th=[ 192], 80.00th=[ 226], 90.00th=[ 241], 95.00th=[ 249],
00:26:27.868 | 99.00th=[ 330], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368],
00:26:27.868 | 99.99th=[ 368]
00:26:27.868 bw ( KiB/s): min= 128, max= 1920, per=4.34%, avg=616.00, stdev=638.26, samples=20
00:26:27.868 iops : min= 32, max= 480, avg=154.00, stdev=159.57, samples=20
00:26:27.868 lat (msec) : 50=62.72%, 100=1.03%, 250=32.52%, 500=3.73%
00:26:27.868 cpu : usr=98.06%, sys=1.51%, ctx=16, majf=0, minf=42
00:26:27.868 IO depths : 1=3.8%, 2=8.8%, 4=21.1%, 8=57.5%, 16=8.8%, 32=0.0%, >=64=0.0%
00:26:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.868 issued rwts: total=1556,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.868 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.868 filename1: (groupid=0, jobs=1): err= 0: pid=3951189: Mon Jul 15 13:18:47 2024
00:26:27.868 read: IOPS=155, BW=622KiB/s (637kB/s)(6272KiB/10079msec)
00:26:27.868 slat (usec): min=7, max=110, avg=32.78, stdev=20.45
00:26:27.868 clat (msec): min=20, max=362, avg=102.37, stdev=94.07
00:26:27.868 lat (msec): min=20, max=362, avg=102.40, stdev=94.06
00:26:27.868 clat percentiles (msec):
00:26:27.868 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.868 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37],
00:26:27.868 | 70.00th=[ 192], 80.00th=[ 232], 90.00th=[ 241], 95.00th=[ 245],
00:26:27.868 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 363],
00:26:27.868 | 99.99th=[ 363]
00:26:27.868 bw ( KiB/s): min= 208, max= 1920, per=4.37%, avg=620.95, stdev=646.55, samples=20
00:26:27.868 iops : min= 52, max= 480, avg=155.20, stdev=161.57, samples=20
00:26:27.868 lat (msec) : 50=63.27%, 100=1.02%, 250=32.53%, 500=3.19%
00:26:27.869 cpu : usr=97.74%, sys=1.76%, ctx=30, majf=0, minf=50
00:26:27.869 IO depths : 1=4.4%, 2=9.7%, 4=22.1%, 8=55.7%, 16=8.1%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename1: (groupid=0, jobs=1): err= 0: pid=3951190: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=137, BW=548KiB/s (561kB/s)(5504KiB/10042msec)
00:26:27.869 slat (usec): min=8, max=128, avg=49.03, stdev=24.68
00:26:27.869 clat (msec): min=31, max=477, avg=116.29, stdev=132.01
00:26:27.869 lat (msec): min=32, max=477, avg=116.34, stdev=132.01
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35],
00:26:27.869 | 70.00th=[ 73], 80.00th=[ 292], 90.00th=[ 317], 95.00th=[ 368],
00:26:27.869 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 477],
00:26:27.869 | 99.99th=[ 477]
00:26:27.869 bw ( KiB/s): min= 128, max= 1920, per=3.83%, avg=544.00, stdev=670.76, samples=20
00:26:27.869 iops : min= 32, max= 480, avg=136.00, stdev=167.69, samples=20
00:26:27.869 lat (msec) : 50=69.77%, 100=1.16%, 250=3.05%, 500=26.02%
00:26:27.869 cpu : usr=98.23%, sys=1.36%, ctx=21, majf=0, minf=51
00:26:27.869 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951191: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=153, BW=613KiB/s (628kB/s)(6176KiB/10075msec)
00:26:27.869 slat (usec): min=7, max=166, avg=42.59, stdev=21.68
00:26:27.869 clat (msec): min=17, max=378, avg=104.03, stdev=102.23
00:26:27.869 lat (msec): min=17, max=378, avg=104.08, stdev=102.22
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.869 | 70.00th=[ 203], 80.00th=[ 236], 90.00th=[ 247], 95.00th=[ 300],
00:26:27.869 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 380], 99.95th=[ 380],
00:26:27.869 | 99.99th=[ 380]
00:26:27.869 bw ( KiB/s): min= 144, max= 1920, per=4.30%, avg=611.20, stdev=665.97, samples=20
00:26:27.869 iops : min= 36, max= 480, avg=152.80, stdev=166.49, samples=20
00:26:27.869 lat (msec) : 20=1.04%, 50=65.28%, 250=24.48%, 500=9.20%
00:26:27.869 cpu : usr=97.22%, sys=1.85%, ctx=79, majf=0, minf=74
00:26:27.869 IO depths : 1=4.5%, 2=9.2%, 4=20.3%, 8=57.9%, 16=8.1%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951192: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=147, BW=591KiB/s (606kB/s)(5952KiB/10063msec)
00:26:27.869 slat (usec): min=8, max=145, avg=45.13, stdev=16.93
00:26:27.869 clat (msec): min=11, max=440, avg=107.79, stdev=106.47
00:26:27.869 lat (msec): min=11, max=440, avg=107.84, stdev=106.45
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.869 | 70.00th=[ 222], 80.00th=[ 239], 90.00th=[ 266], 95.00th=[ 305],
00:26:27.869 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 439], 99.95th=[ 439],
00:26:27.869 | 99.99th=[ 439]
00:26:27.869 bw ( KiB/s): min= 128, max= 1936, per=4.14%, avg=588.80, stdev=649.27, samples=20
00:26:27.869 iops : min= 32, max= 484, avg=147.20, stdev=162.32, samples=20
00:26:27.869 lat (msec) : 20=0.27%, 50=65.05%, 100=1.34%, 250=21.64%, 500=11.69%
00:26:27.869 cpu : usr=97.88%, sys=1.65%, ctx=26, majf=0, minf=48
00:26:27.869 IO depths : 1=3.0%, 2=9.1%, 4=24.6%, 8=53.8%, 16=9.5%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951193: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=160, BW=641KiB/s (656kB/s)(6464KiB/10086msec)
00:26:27.869 slat (usec): min=7, max=264, avg=38.92, stdev=26.56
00:26:27.869 clat (msec): min=8, max=315, avg=99.50, stdev=90.95
00:26:27.869 lat (msec): min=8, max=315, avg=99.53, stdev=90.93
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.869 | 70.00th=[ 194], 80.00th=[ 222], 90.00th=[ 239], 95.00th=[ 245],
00:26:27.869 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 317], 99.95th=[ 317],
00:26:27.869 | 99.99th=[ 317]
00:26:27.869 bw ( KiB/s): min= 256, max= 1920, per=4.51%, avg=640.00, stdev=666.63, samples=20
00:26:27.869 iops : min= 64, max= 480, avg=160.00, stdev=166.66, samples=20
00:26:27.869 lat (msec) : 10=0.99%, 20=1.11%, 50=62.25%, 250=34.65%, 500=0.99%
00:26:27.869 cpu : usr=97.02%, sys=1.87%, ctx=146, majf=0, minf=73
00:26:27.869 IO depths : 1=3.8%, 2=10.0%, 4=24.8%, 8=52.7%, 16=8.7%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951194: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=138, BW=555KiB/s (568kB/s)(5568KiB/10037msec)
00:26:27.869 slat (usec): min=8, max=109, avg=40.48, stdev=17.31
00:26:27.869 clat (msec): min=30, max=476, avg=115.01, stdev=127.23
00:26:27.869 lat (msec): min=30, max=476, avg=115.05, stdev=127.22
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 36],
00:26:27.869 | 70.00th=[ 81], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 351],
00:26:27.869 | 99.00th=[ 414], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 477],
00:26:27.869 | 99.99th=[ 477]
00:26:27.869 bw ( KiB/s): min= 128, max= 1920, per=3.87%, avg=550.40, stdev=666.90, samples=20
00:26:27.869 iops : min= 32, max= 480, avg=137.60, stdev=166.72, samples=20
00:26:27.869 lat (msec) : 50=68.97%, 100=1.15%, 250=6.18%, 500=23.71%
00:26:27.869 cpu : usr=97.36%, sys=1.71%, ctx=53, majf=0, minf=49
00:26:27.869 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951195: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=146, BW=586KiB/s (600kB/s)(5888KiB/10053msec)
00:26:27.869 slat (usec): min=7, max=145, avg=43.03, stdev=23.01
00:26:27.869 clat (msec): min=29, max=468, avg=108.92, stdev=108.92
00:26:27.869 lat (msec): min=29, max=468, avg=108.96, stdev=108.90
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37],
00:26:27.869 | 70.00th=[ 192], 80.00th=[ 239], 90.00th=[ 271], 95.00th=[ 300],
00:26:27.869 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 468], 99.95th=[ 468],
00:26:27.869 | 99.99th=[ 468]
00:26:27.869 bw ( KiB/s): min= 144, max= 1936, per=4.10%, avg=582.40, stdev=648.89, samples=20
00:26:27.869 iops : min= 36, max= 484, avg=145.60, stdev=162.22, samples=20
00:26:27.869 lat (msec) : 50=65.22%, 100=1.09%, 250=22.01%, 500=11.68%
00:26:27.869 cpu : usr=98.19%, sys=1.38%, ctx=19, majf=0, minf=48
00:26:27.869 IO depths : 1=2.9%, 2=9.1%, 4=24.7%, 8=53.7%, 16=9.6%, 32=0.0%, >=64=0.0%
00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:27.869 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951196: Mon Jul 15 13:18:47 2024
00:26:27.869 read: IOPS=139, BW=558KiB/s (572kB/s)(5608KiB/10043msec)
00:26:27.869 slat (usec): min=8, max=115, avg=35.60, stdev=18.04
00:26:27.869 clat (msec): min=19, max=482, avg=114.35, stdev=132.75
00:26:27.869 lat (msec): min=19, max=482, avg=114.39, stdev=132.74
00:26:27.869 clat percentiles (msec):
00:26:27.869 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 33],
00:26:27.869 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 36], 00:26:27.869 | 70.00th=[ 58], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 359], 00:26:27.869 | 99.00th=[ 430], 99.50th=[ 460], 99.90th=[ 481], 99.95th=[ 481], 00:26:27.869 | 99.99th=[ 481] 00:26:27.869 bw ( KiB/s): min= 128, max= 2112, per=3.90%, avg=554.40, stdev=692.04, samples=20 00:26:27.869 iops : min= 32, max= 528, avg=138.60, stdev=173.01, samples=20 00:26:27.869 lat (msec) : 20=2.92%, 50=66.12%, 100=2.28%, 250=2.28%, 500=26.39% 00:26:27.869 cpu : usr=98.02%, sys=1.52%, ctx=34, majf=0, minf=86 00:26:27.869 IO depths : 1=2.4%, 2=7.1%, 4=19.4%, 8=60.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.869 complete : 0=0.0%, 4=92.8%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.869 issued rwts: total=1402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.869 filename2: (groupid=0, jobs=1): err= 0: pid=3951197: Mon Jul 15 13:18:47 2024 00:26:27.869 read: IOPS=152, BW=609KiB/s (624kB/s)(6132KiB/10063msec) 00:26:27.869 slat (usec): min=8, max=129, avg=44.66, stdev=20.52 00:26:27.869 clat (msec): min=21, max=377, avg=104.47, stdev=100.72 00:26:27.869 lat (msec): min=21, max=377, avg=104.51, stdev=100.70 00:26:27.869 clat percentiles (msec): 00:26:27.869 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:27.869 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37], 00:26:27.869 | 70.00th=[ 192], 80.00th=[ 234], 90.00th=[ 247], 95.00th=[ 300], 00:26:27.869 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:26:27.869 | 99.99th=[ 380] 00:26:27.869 bw ( KiB/s): min= 240, max= 1920, per=4.27%, avg=606.80, stdev=648.66, samples=20 00:26:27.869 iops : min= 60, max= 480, avg=151.70, stdev=162.16, samples=20 00:26:27.869 lat (msec) : 50=64.97%, 100=0.59%, 250=26.61%, 500=7.83% 00:26:27.869 cpu : usr=98.08%, sys=1.48%, ctx=22, majf=0, minf=37 00:26:27.869 IO depths : 1=3.7%, 2=9.2%, 4=22.5%, 8=55.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:26:27.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.869 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.870 issued rwts: total=1533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.870 filename2: (groupid=0, jobs=1): err= 0: pid=3951198: Mon Jul 15 13:18:47 2024 00:26:27.870 read: IOPS=148, BW=594KiB/s (608kB/s)(5976KiB/10064msec) 00:26:27.870 slat (usec): min=8, max=144, avg=50.14, stdev=22.81 00:26:27.870 clat (msec): min=18, max=394, avg=107.35, stdev=106.70 00:26:27.870 lat (msec): min=18, max=394, avg=107.40, stdev=106.69 00:26:27.870 clat percentiles (msec): 00:26:27.870 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:27.870 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 37], 00:26:27.870 | 70.00th=[ 226], 80.00th=[ 239], 90.00th=[ 266], 95.00th=[ 300], 00:26:27.870 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 397], 99.95th=[ 397], 00:26:27.870 | 99.99th=[ 397] 00:26:27.870 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=591.20, stdev=652.17, samples=20 00:26:27.870 iops : min= 32, max= 480, avg=147.80, stdev=163.04, samples=20 00:26:27.870 lat (msec) : 20=0.27%, 50=64.79%, 100=1.74%, 250=21.69%, 500=11.51% 00:26:27.870 cpu : usr=97.73%, sys=1.67%, ctx=36, majf=0, minf=69 00:26:27.870 IO depths : 1=4.0%, 2=9.8%, 4=23.4%, 8=54.2%, 16=8.6%, 
32=0.0%, >=64=0.0% 00:26:27.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.870 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.870 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.870 00:26:27.870 Run status group 0 (all jobs): 00:26:27.870 READ: bw=13.9MiB/s (14.5MB/s), 547KiB/s-641KiB/s (560kB/s-656kB/s), io=140MiB (147MB), run=10037-10115msec 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 bdev_null0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 [2024-07-15 13:18:48.369849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 bdev_null1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.870 { 00:26:27.870 "params": { 00:26:27.870 "name": "Nvme$subsystem", 00:26:27.870 "trtype": "$TEST_TRANSPORT", 00:26:27.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.870 "adrfam": "ipv4", 00:26:27.870 "trsvcid": "$NVMF_PORT", 00:26:27.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.870 "hdgst": ${hdgst:-false}, 00:26:27.870 "ddgst": ${ddgst:-false} 00:26:27.870 }, 00:26:27.870 "method": "bdev_nvme_attach_controller" 00:26:27.870 } 00:26:27.870 EOF 00:26:27.870 )") 00:26:27.870 13:18:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:27.870 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.871 { 00:26:27.871 "params": { 00:26:27.871 "name": "Nvme$subsystem", 00:26:27.871 "trtype": "$TEST_TRANSPORT", 00:26:27.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.871 "adrfam": "ipv4", 00:26:27.871 "trsvcid": "$NVMF_PORT", 00:26:27.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.871 "hdgst": ${hdgst:-false}, 00:26:27.871 "ddgst": ${ddgst:-false} 00:26:27.871 }, 00:26:27.871 "method": "bdev_nvme_attach_controller" 00:26:27.871 } 00:26:27.871 EOF 00:26:27.871 )") 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
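[editor's note] The rpc_cmd calls traced above reduce to four target-side RPCs per subsystem. A minimal stand-alone sketch using SPDK's stock rpc.py client, with the exact arguments from the log (the second subsystem differs only in the index):

# Sketch of the per-subsystem setup the harness drives through rpc_cmd above;
# rpc.py talks to the running nvmf_tgt over its default /var/tmp/spdk.sock socket.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420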
00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.871 "params": { 00:26:27.871 "name": "Nvme0", 00:26:27.871 "trtype": "tcp", 00:26:27.871 "traddr": "10.0.0.2", 00:26:27.871 "adrfam": "ipv4", 00:26:27.871 "trsvcid": "4420", 00:26:27.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.871 "hdgst": false, 00:26:27.871 "ddgst": false 00:26:27.871 }, 00:26:27.871 "method": "bdev_nvme_attach_controller" 00:26:27.871 },{ 00:26:27.871 "params": { 00:26:27.871 "name": "Nvme1", 00:26:27.871 "trtype": "tcp", 00:26:27.871 "traddr": "10.0.0.2", 00:26:27.871 "adrfam": "ipv4", 00:26:27.871 "trsvcid": "4420", 00:26:27.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.871 "hdgst": false, 00:26:27.871 "ddgst": false 00:26:27.871 }, 00:26:27.871 "method": "bdev_nvme_attach_controller" 00:26:27.871 }' 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:27.871 13:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.871 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.871 ... 00:26:27.871 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.871 ... 
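[editor's note] The invocation above preloads the SPDK bdev engine into stock fio and hands it the generated bdev JSON and job file over /dev/fd/62 and /dev/fd/61. A simplified re-run with ordinary files in place of the two anonymous streams (conf.json and dif.fio are placeholder names for the documents gen_nvmf_target_json and gen_fio_conf emit on the fly):

# Simplified equivalent of the fio command traced above, same build paths:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json dif.fio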
00:26:27.871 fio-3.35 00:26:27.871 Starting 4 threads 00:26:27.871 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.167 00:26:33.167 filename0: (groupid=0, jobs=1): err= 0: pid=3952575: Mon Jul 15 13:18:54 2024 00:26:33.167 read: IOPS=1936, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5003msec) 00:26:33.167 slat (nsec): min=4406, max=62233, avg=14449.70, stdev=6531.30 00:26:33.167 clat (usec): min=1000, max=12168, avg=4083.04, stdev=585.96 00:26:33.167 lat (usec): min=1012, max=12182, avg=4097.49, stdev=586.21 00:26:33.167 clat percentiles (usec): 00:26:33.167 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3589], 20.00th=[ 3752], 00:26:33.167 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4080], 00:26:33.167 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 5604], 00:26:33.167 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7242], 99.95th=[ 8356], 00:26:33.167 | 99.99th=[12125] 00:26:33.167 bw ( KiB/s): min=14960, max=16384, per=25.26%, avg=15499.00, stdev=406.81, samples=10 00:26:33.167 iops : min= 1870, max= 2048, avg=1937.30, stdev=50.88, samples=10 00:26:33.167 lat (msec) : 2=0.11%, 4=44.14%, 10=55.73%, 20=0.02% 00:26:33.167 cpu : usr=95.52%, sys=4.02%, ctx=10, majf=0, minf=55 00:26:33.167 IO depths : 1=0.1%, 2=6.0%, 4=67.5%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 issued rwts: total=9690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:33.167 filename0: (groupid=0, jobs=1): err= 0: pid=3952576: Mon Jul 15 13:18:54 2024 00:26:33.167 read: IOPS=1925, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5001msec) 00:26:33.167 slat (usec): min=4, max=393, avg=15.78, stdev= 6.91 00:26:33.167 clat (usec): min=777, max=7607, avg=4105.81, stdev=642.86 00:26:33.167 lat (usec): min=801, max=7628, avg=4121.58, stdev=642.73 00:26:33.167 clat percentiles (usec): 00:26:33.167 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3752], 00:26:33.167 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4080], 00:26:33.167 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5669], 00:26:33.167 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7439], 00:26:33.167 | 99.99th=[ 7635] 00:26:33.167 bw ( KiB/s): min=15008, max=15808, per=25.14%, avg=15429.11, stdev=295.07, samples=9 00:26:33.167 iops : min= 1876, max= 1976, avg=1928.56, stdev=36.90, samples=9 00:26:33.167 lat (usec) : 1000=0.05% 00:26:33.167 lat (msec) : 2=0.29%, 4=48.42%, 10=51.24% 00:26:33.167 cpu : usr=94.60%, sys=4.82%, ctx=10, majf=0, minf=34 00:26:33.167 IO depths : 1=0.1%, 2=3.7%, 4=68.7%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 issued rwts: total=9630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:33.167 filename1: (groupid=0, jobs=1): err= 0: pid=3952577: Mon Jul 15 13:18:54 2024 00:26:33.167 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:26:33.167 slat (nsec): min=3875, max=53080, avg=15187.34, stdev=6502.25 00:26:33.167 clat (usec): min=895, max=7158, avg=4168.62, stdev=566.24 00:26:33.167 lat (usec): min=909, max=7167, avg=4183.80, stdev=565.69 00:26:33.167 clat percentiles (usec): 
00:26:33.167 | 1.00th=[ 2999], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3851], 00:26:33.167 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4113], 00:26:33.167 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5604], 00:26:33.167 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6718], 99.95th=[ 6915], 00:26:33.167 | 99.99th=[ 7177] 00:26:33.167 bw ( KiB/s): min=14336, max=15616, per=24.75%, avg=15189.33, stdev=411.59, samples=9 00:26:33.167 iops : min= 1792, max= 1952, avg=1898.67, stdev=51.45, samples=9 00:26:33.167 lat (usec) : 1000=0.01% 00:26:33.167 lat (msec) : 2=0.12%, 4=36.31%, 10=63.57% 00:26:33.167 cpu : usr=91.88%, sys=5.84%, ctx=257, majf=0, minf=44 00:26:33.167 IO depths : 1=0.1%, 2=5.7%, 4=66.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.167 issued rwts: total=9489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:33.167 filename1: (groupid=0, jobs=1): err= 0: pid=3952578: Mon Jul 15 13:18:54 2024 00:26:33.167 read: IOPS=1912, BW=14.9MiB/s (15.7MB/s)(74.8MiB/5003msec) 00:26:33.167 slat (nsec): min=4211, max=62241, avg=12875.14, stdev=5694.77 00:26:33.167 clat (usec): min=720, max=8837, avg=4141.29, stdev=674.45 00:26:33.167 lat (usec): min=733, max=8861, avg=4154.16, stdev=674.45 00:26:33.167 clat percentiles (usec): 00:26:33.167 | 1.00th=[ 2868], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3752], 00:26:33.167 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4080], 00:26:33.167 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 5211], 95.00th=[ 5800], 00:26:33.167 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7111], 99.95th=[ 8848], 00:26:33.167 | 99.99th=[ 8848] 00:26:33.167 bw ( KiB/s): min=14304, max=16096, per=24.93%, avg=15302.40, stdev=575.96, samples=10 00:26:33.168 iops : min= 1788, max= 2012, avg=1912.80, stdev=72.00, samples=10 00:26:33.168 lat (usec) : 750=0.01% 00:26:33.168 lat (msec) : 2=0.02%, 4=43.40%, 10=56.57% 00:26:33.168 cpu : usr=95.40%, sys=4.14%, ctx=21, majf=0, minf=32 00:26:33.168 IO depths : 1=0.1%, 2=6.6%, 4=65.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.168 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.168 issued rwts: total=9569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.168 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:33.168 00:26:33.168 Run status group 0 (all jobs): 00:26:33.168 READ: bw=59.9MiB/s (62.8MB/s), 14.8MiB/s-15.1MiB/s (15.5MB/s-15.9MB/s), io=300MiB (314MB), run=5001-5003msec 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 00:26:33.168 real 0m24.287s 00:26:33.168 user 4m33.722s 00:26:33.168 sys 0m6.645s 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 ************************************ 00:26:33.168 END TEST fio_dif_rand_params 00:26:33.168 ************************************ 00:26:33.168 13:18:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:33.168 13:18:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:33.168 13:18:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:33.168 13:18:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 ************************************ 00:26:33.168 START TEST fio_dif_digest 00:26:33.168 ************************************ 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 bdev_null0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.168 [2024-07-15 13:18:54.789191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.168 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.168 { 00:26:33.168 "params": { 00:26:33.168 "name": "Nvme$subsystem", 00:26:33.168 "trtype": "$TEST_TRANSPORT", 00:26:33.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.169 "adrfam": "ipv4", 00:26:33.169 "trsvcid": "$NVMF_PORT", 00:26:33.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.169 "hdgst": ${hdgst:-false}, 00:26:33.169 "ddgst": ${ddgst:-false} 00:26:33.169 
}, 00:26:33.169 "method": "bdev_nvme_attach_controller" 00:26:33.169 } 00:26:33.169 EOF 00:26:33.169 )") 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:33.169 "params": { 00:26:33.169 "name": "Nvme0", 00:26:33.169 "trtype": "tcp", 00:26:33.169 "traddr": "10.0.0.2", 00:26:33.169 "adrfam": "ipv4", 00:26:33.169 "trsvcid": "4420", 00:26:33.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:33.169 "hdgst": true, 00:26:33.169 "ddgst": true 00:26:33.169 }, 00:26:33.169 "method": "bdev_nvme_attach_controller" 00:26:33.169 }' 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:33.169 13:18:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.427 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:33.427 ... 
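[editor's note] The "filename0: (g=0): rw=randread, bs=(R) 128KiB..." banner below comes from the job file gen_fio_conf writes. A plausible reconstruction from the parameters dif.sh sets above (bs=128k, numjobs=3, iodepth=3, runtime=10); the exact key set and the Nvme0n1 bdev name are assumptions, not taken from the log:

[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3
[filename0]
filename=Nvme0n1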
00:26:33.427 fio-3.35 00:26:33.427 Starting 3 threads 00:26:33.427 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.615 00:26:45.615 filename0: (groupid=0, jobs=1): err= 0: pid=3953453: Mon Jul 15 13:19:05 2024 00:26:45.615 read: IOPS=192, BW=24.1MiB/s (25.3MB/s)(241MiB/10004msec) 00:26:45.615 slat (nsec): min=4641, max=38576, avg=15674.29, stdev=3486.17 00:26:45.615 clat (usec): min=8249, max=59781, avg=15545.11, stdev=3908.43 00:26:45.615 lat (usec): min=8272, max=59797, avg=15560.79, stdev=3908.56 00:26:45.615 clat percentiles (usec): 00:26:45.615 | 1.00th=[10028], 5.00th=[12911], 10.00th=[13698], 20.00th=[14353], 00:26:45.615 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:26:45.615 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:26:45.615 | 99.00th=[19530], 99.50th=[55313], 99.90th=[58983], 99.95th=[60031], 00:26:45.615 | 99.99th=[60031] 00:26:45.615 bw ( KiB/s): min=21504, max=26368, per=30.99%, avg=24652.80, stdev=1157.15, samples=20 00:26:45.615 iops : min= 168, max= 206, avg=192.60, stdev= 9.04, samples=20 00:26:45.615 lat (msec) : 10=1.14%, 20=97.93%, 50=0.16%, 100=0.78% 00:26:45.615 cpu : usr=90.40%, sys=7.99%, ctx=445, majf=0, minf=140 00:26:45.615 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.615 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.615 filename0: (groupid=0, jobs=1): err= 0: pid=3953454: Mon Jul 15 13:19:05 2024 00:26:45.615 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(270MiB/10045msec) 00:26:45.615 slat (nsec): min=6308, max=37311, avg=15114.13, stdev=3098.77 00:26:45.615 clat (usec): min=7782, max=58185, avg=13919.16, stdev=3902.37 00:26:45.615 lat (usec): min=7796, max=58203, avg=13934.28, stdev=3902.62 00:26:45.615 clat percentiles (usec): 00:26:45.615 | 1.00th=[ 9503], 5.00th=[11469], 10.00th=[12125], 20.00th=[12780], 00:26:45.615 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:26:45.615 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:26:45.615 | 99.00th=[17433], 99.50th=[55313], 99.90th=[57410], 99.95th=[57934], 00:26:45.615 | 99.99th=[57934] 00:26:45.615 bw ( KiB/s): min=24832, max=29696, per=34.71%, avg=27609.60, stdev=1509.44, samples=20 00:26:45.615 iops : min= 194, max= 232, avg=215.70, stdev=11.79, samples=20 00:26:45.615 lat (msec) : 10=2.27%, 20=96.94%, 50=0.05%, 100=0.74% 00:26:45.615 cpu : usr=91.25%, sys=7.97%, ctx=211, majf=0, minf=116 00:26:45.615 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.615 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.615 filename0: (groupid=0, jobs=1): err= 0: pid=3953455: Mon Jul 15 13:19:05 2024 00:26:45.615 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10046msec) 00:26:45.615 slat (nsec): min=4301, max=48407, avg=13984.37, stdev=1919.43 00:26:45.615 clat (usec): min=8420, max=56038, avg=13940.41, stdev=2373.73 00:26:45.615 lat (usec): min=8434, max=56052, avg=13954.39, stdev=2373.81 00:26:45.615 clat percentiles (usec): 
00:26:45.615 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[12256], 20.00th=[13042], 00:26:45.615 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:26:45.615 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15926], 00:26:45.615 | 99.00th=[16909], 99.50th=[17695], 99.90th=[54264], 99.95th=[54789], 00:26:45.615 | 99.99th=[55837] 00:26:45.615 bw ( KiB/s): min=25088, max=29440, per=34.65%, avg=27561.10, stdev=986.07, samples=20 00:26:45.615 iops : min= 196, max= 230, avg=215.30, stdev= 7.71, samples=20 00:26:45.615 lat (msec) : 10=3.53%, 20=96.24%, 50=0.05%, 100=0.19% 00:26:45.615 cpu : usr=92.52%, sys=7.01%, ctx=22, majf=0, minf=130 00:26:45.615 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.615 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.615 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:45.615 00:26:45.615 Run status group 0 (all jobs): 00:26:45.615 READ: bw=77.7MiB/s (81.5MB/s), 24.1MiB/s-26.9MiB/s (25.3MB/s-28.2MB/s), io=780MiB (818MB), run=10004-10046msec 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.615 00:26:45.615 real 0m11.134s 00:26:45.615 user 0m28.668s 00:26:45.615 sys 0m2.579s 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.615 13:19:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.615 ************************************ 00:26:45.615 END TEST fio_dif_digest 00:26:45.615 ************************************ 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:45.615 13:19:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:45.615 13:19:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.615 13:19:05 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.615 rmmod nvme_tcp 00:26:45.615 rmmod nvme_fabrics 00:26:45.615 rmmod nvme_keyring 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3946872 ']' 00:26:45.615 13:19:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3946872 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3946872 ']' 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3946872 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.615 13:19:05 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946872 00:26:45.615 13:19:06 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.615 13:19:06 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.615 13:19:06 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946872' 00:26:45.615 killing process with pid 3946872 00:26:45.615 13:19:06 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3946872 00:26:45.615 13:19:06 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3946872 00:26:45.615 13:19:06 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:45.615 13:19:06 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:45.615 Waiting for block devices as requested 00:26:45.615 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:45.874 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:45.874 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:46.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:46.132 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:46.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:46.132 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:46.391 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:46.391 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:46.391 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:46.391 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:46.649 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:46.649 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:46.650 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:46.650 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:46.908 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:46.908 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:47.167 13:19:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.167 13:19:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.167 13:19:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.167 13:19:08 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.167 13:19:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.167 13:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:47.167 13:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.070 13:19:10 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.070 00:26:49.070 real 1m6.582s 00:26:49.070 user 6m29.659s 00:26:49.070 sys 0m18.336s 00:26:49.070 13:19:10 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.070 13:19:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:49.070 ************************************ 00:26:49.070 END TEST nvmf_dif 00:26:49.070 ************************************ 00:26:49.070 13:19:10 -- common/autotest_common.sh@1142 -- # return 0 00:26:49.070 13:19:10 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:49.070 13:19:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:49.070 13:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.070 13:19:10 -- common/autotest_common.sh@10 -- # set +x 00:26:49.070 ************************************ 00:26:49.070 START TEST nvmf_abort_qd_sizes 00:26:49.070 ************************************ 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:49.070 * Looking for test storage... 00:26:49.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:49.070 13:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.328 13:19:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.328 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.328 13:19:10 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.328 13:19:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.251 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
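[editor's note] The two "Found net devices under ..." lines above come from mapping each detected e810 NIC's PCI address to its kernel netdev names through sysfs. A condensed sketch of that lookup, which yields the cvl_0_0/cvl_0_1 names used for the rest of the test:

# Condensed form of the pci_net_devs glob traced above:
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${path##*/}"
    done
done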
00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:26:51.252 00:26:51.252 --- 10.0.0.2 ping statistics --- 00:26:51.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.252 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:26:51.252 00:26:51.252 --- 10.0.0.1 ping statistics --- 00:26:51.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.252 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:51.252 13:19:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:52.186 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:52.186 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:52.186 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:53.121 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3958248 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3958248 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3958248 ']' 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.380 13:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.380 [2024-07-15 13:19:14.970128] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:26:53.380 [2024-07-15 13:19:14.970232] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.380 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.380 [2024-07-15 13:19:15.034421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.638 [2024-07-15 13:19:15.145988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.638 [2024-07-15 13:19:15.146038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.638 [2024-07-15 13:19:15.146060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.638 [2024-07-15 13:19:15.146078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.638 [2024-07-15 13:19:15.146092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.638 [2024-07-15 13:19:15.146169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.638 [2024-07-15 13:19:15.146282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.638 [2024-07-15 13:19:15.146347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.638 [2024-07-15 13:19:15.146355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:53.638 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:26:53.639 13:19:15 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.639 13:19:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:53.896 ************************************ 00:26:53.896 START TEST spdk_target_abort 00:26:53.896 ************************************ 00:26:53.896 13:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:53.896 13:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:53.896 13:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:53.896 13:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.896 13:19:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.209 spdk_targetn1 00:26:57.209 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.209 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.209 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.210 [2024-07-15 13:19:18.185379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.210 [2024-07-15 13:19:18.217615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:57.210 13:19:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:57.210 EAL: No free 2048 kB hugepages 
reported on node 1
00:27:00.489 Initializing NVMe Controllers
00:27:00.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:00.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:00.489 Initialization complete. Launching workers.
00:27:00.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10087, failed: 0
00:27:00.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 8872
00:27:00.489 success 795, unsuccess 420, failed 0
00:27:00.489 13:19:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:00.489 13:19:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:00.489 EAL: No free 2048 kB hugepages reported on node 1
00:27:03.763 Initializing NVMe Controllers
00:27:03.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:03.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:03.763 Initialization complete. Launching workers.
00:27:03.763 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8662, failed: 0
00:27:03.763 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7435
00:27:03.763 success 341, unsuccess 886, failed 0
00:27:03.763 13:19:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:03.763 13:19:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:03.763 EAL: No free 2048 kB hugepages reported on node 1
00:27:06.287 Initializing NVMe Controllers
00:27:06.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:06.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:06.287 Initialization complete. Launching workers.
00:27:06.287 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29864, failed: 0
00:27:06.287 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2715, failed to submit 27149
00:27:06.287 success 508, unsuccess 2207, failed 0
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.287 13:19:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3958248
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3958248 ']'
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3958248
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3958248
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3958248'
00:27:07.658 killing process with pid 3958248
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3958248
00:27:07.658 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3958248
00:27:07.917
00:27:07.917 real 0m14.229s
00:27:07.917 user 0m53.799s
00:27:07.917 sys 0m2.641s
00:27:07.917 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:07.917 13:19:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:07.917 ************************************
00:27:07.917 END TEST spdk_target_abort
00:27:07.917 ************************************
00:27:07.917 13:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0
00:27:07.917 13:19:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:27:07.917 13:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:07.917 13:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:07.917 13:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.175
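
Condensed, spdk_target_abort does the following: claim the local NVMe disk at 0000:88:00.0 as bdev spdk_target, export its namespace over NVMe/TCP at 10.0.0.2:4420, then run the abort example at queue depths 4, 24 and 64; each summary above reports I/O completed, aborts submitted, and the success/unsuccess/failed split for that depth. A sketch of the same RPC sequence, with the socket path and all values taken from this run:

    # Target setup plus the queue-depth sweep, as driven through rpc.py (sketch).
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
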
************************************ 00:27:08.175 START TEST kernel_target_abort 00:27:08.175 ************************************ 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:08.175 13:19:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:09.106 Waiting for block devices as requested 00:27:09.106 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:09.363 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.363 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:09.363 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:09.621 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:09.621 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:09.621 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:09.621 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:09.879 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:09.879 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.879 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:09.879 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.137 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.137 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.137 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:10.394 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:10.394 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:10.395 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:10.652 No valid GPT data, bailing 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.652 13:19:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:27:10.652
00:27:10.652 Discovery Log Number of Records 2, Generation counter 2
00:27:10.652 =====Discovery Log Entry 0======
00:27:10.652 trtype: tcp
00:27:10.652 adrfam: ipv4
00:27:10.652 subtype: current discovery subsystem
00:27:10.652 treq: not specified, sq flow control disable supported
00:27:10.652 portid: 1
00:27:10.652 trsvcid: 4420
00:27:10.652 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:10.652 traddr: 10.0.0.1
00:27:10.652 eflags: none
00:27:10.652 sectype: none
00:27:10.652 =====Discovery Log Entry 1======
00:27:10.652 trtype: tcp
00:27:10.652 adrfam: ipv4
00:27:10.652 subtype: nvme subsystem
00:27:10.652 treq: not specified, sq flow control disable supported
00:27:10.652 portid: 1
00:27:10.652 trsvcid: 4420
00:27:10.652 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:10.652 traddr: 10.0.0.1
00:27:10.652 eflags: none
00:27:10.652 sectype: none
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.652 13:19:32
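
The mkdir/echo/ln -s sequence traced above is configure_kernel_target driving the Linux kernel nvmet target through configfs: one subsystem with a namespace backed by /dev/nvme0n1, one TCP port on 10.0.0.1:4420, and a symlink tying them together, which is exactly what the two discovery-log records then show. The trace only prints the echoed values, not the attribute files they land in, so the sketch below fills those in with the stock nvmet configfs attribute names (an assumption, not read from the log):

    # Kernel NVMe-oF target via configfs (sketch; attribute paths assumed
    # from the upstream nvmet ABI, values from this run).
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
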
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:10.652 13:19:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:10.652 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.926 Initializing NVMe Controllers 00:27:13.926 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:13.926 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:13.926 Initialization complete. Launching workers. 00:27:13.926 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31511, failed: 0 00:27:13.926 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31511, failed to submit 0 00:27:13.926 success 0, unsuccess 31511, failed 0 00:27:13.926 13:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:13.926 13:19:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:13.926 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.242 Initializing NVMe Controllers 00:27:17.242 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.242 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:17.242 Initialization complete. Launching workers. 
00:27:17.242 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64822, failed: 0 00:27:17.242 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16358, failed to submit 48464 00:27:17.242 success 0, unsuccess 16358, failed 0 00:27:17.242 13:19:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.242 13:19:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.242 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.522 Initializing NVMe Controllers 00:27:20.522 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:20.522 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:20.522 Initialization complete. Launching workers. 00:27:20.522 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63249, failed: 0 00:27:20.522 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15794, failed to submit 47455 00:27:20.522 success 0, unsuccess 15794, failed 0 00:27:20.522 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:20.523 13:19:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:21.090 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:21.090 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:21.090 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci
00:27:21.090 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:27:21.090 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:27:22.027 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:27:22.285
00:27:22.285 real 0m14.189s
00:27:22.285 user 0m5.027s
00:27:22.285 sys 0m3.376s
00:27:22.285 13:19:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:22.285 13:19:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:22.285 ************************************
00:27:22.285 END TEST kernel_target_abort
00:27:22.285 ************************************
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:22.285 rmmod nvme_tcp
00:27:22.285 rmmod nvme_fabrics
00:27:22.285 rmmod nvme_keyring
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3958248 ']'
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3958248
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3958248 ']'
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3958248
00:27:22.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3958248) - No such process
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3958248 is not found'
00:27:22.285 Process with pid 3958248 is not found
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']'
00:27:22.285 13:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:23.219 Waiting for block devices as requested
00:27:23.477 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:27:23.477 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:27:23.477 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:27:23.736 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:27:23.736 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:27:23.736 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:27:23.994 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:27:23.994 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:27:23.994 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:27:23.994 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:27:24.253 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:27:24.253 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:27:24.253 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:27:24.253 0000:80:04.3 (8086 0e23): vfio-pci ->
ioatdma
00:27:24.512 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:27:24.512 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:27:24.512 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:27:24.512 13:19:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:27.045 13:19:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:27.045
00:27:27.045 real 0m37.536s
00:27:27.045 user 1m0.735s
00:27:27.045 sys 0m9.239s
00:27:27.045 13:19:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:27.045 13:19:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:27:27.045 ************************************
00:27:27.045 END TEST nvmf_abort_qd_sizes
00:27:27.045 ************************************
00:27:27.045 13:19:48 -- common/autotest_common.sh@1142 -- # return 0
00:27:27.045 13:19:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:27:27.045 13:19:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:27.045 13:19:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:27.045 13:19:48 -- common/autotest_common.sh@10 -- # set +x
00:27:27.045 ************************************
00:27:27.045 START TEST keyring_file
00:27:27.045 ************************************
00:27:27.045 13:19:48 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:27:27.045 * Looking for test storage...
00:27:27.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:27.045 13:19:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:27.045 13:19:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.045 13:19:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.046 13:19:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.046 13:19:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.046 13:19:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.046 13:19:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.046 13:19:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.046 13:19:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.046 13:19:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:27.046 13:19:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GOZbIDZ3Pd 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.046 13:19:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GOZbIDZ3Pd 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GOZbIDZ3Pd 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GOZbIDZ3Pd 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JHswQ8ZK0C 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.046 13:19:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JHswQ8ZK0C 00:27:27.046 13:19:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JHswQ8ZK0C 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JHswQ8ZK0C 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=3964013 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:27.046 13:19:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3964013 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3964013 ']' 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.046 13:19:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.046 [2024-07-15 13:19:48.501398] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
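
The two mktemp files prepared above (/tmp/tmp.GOZbIDZ3Pd for key0, /tmp/tmp.JHswQ8ZK0C for key1) hold the test PSKs in NVMe TLS interchange form: format_interchange_psk wraps the raw hex key under the NVMeTLSkey-1 prefix with digest 0 (no PSK hash), and each file is then locked to mode 0600 so it is accepted as a key file. A sketch of what that python step appears to compute, assuming the key-plus-CRC32, base64-wrapped layout (the output path below is illustrative):

    # Build an NVMeTLSkey-1 interchange string for key0 (sketch; the
    # CRC32-little-endian + base64 layout is an assumption here).
    python3 -c 'import base64, zlib; key = bytes.fromhex("00112233445566778899aabbccddeeff"); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")' > /tmp/key0
    chmod 0600 /tmp/key0
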
00:27:27.046 [2024-07-15 13:19:48.501497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964013 ] 00:27:27.046 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.046 [2024-07-15 13:19:48.560638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.046 [2024-07-15 13:19:48.679086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:27.976 13:19:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.976 [2024-07-15 13:19:49.440707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.976 null0 00:27:27.976 [2024-07-15 13:19:49.472730] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:27.976 [2024-07-15 13:19:49.473151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:27.976 [2024-07-15 13:19:49.480733] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.976 13:19:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.976 [2024-07-15 13:19:49.492763] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:27.976 request: 00:27:27.976 { 00:27:27.976 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.976 "secure_channel": false, 00:27:27.976 "listen_address": { 00:27:27.976 "trtype": "tcp", 00:27:27.976 "traddr": "127.0.0.1", 00:27:27.976 "trsvcid": "4420" 00:27:27.976 }, 00:27:27.976 "method": "nvmf_subsystem_add_listener", 00:27:27.976 "req_id": 1 00:27:27.976 } 00:27:27.976 Got JSON-RPC error response 00:27:27.976 response: 00:27:27.976 { 00:27:27.976 "code": -32602, 00:27:27.976 "message": "Invalid parameters" 00:27:27.976 } 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 
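
The NOT wrapper above is the harness's expected-failure assertion: adding a second listener on 127.0.0.1:4420 must be rejected ("Listener already exists", surfaced to the client as Invalid parameters), and NOT inverts the exit status, with es=1 recording that the wrapped call did fail. Its observable semantics boil down to this (a sketch, not the actual autotest_common.sh implementation):

    # Succeed only if the wrapped command fails (sketch).
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
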
00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:27.976 13:19:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:27.977 13:19:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=3964149 00:27:27.977 13:19:49 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:27.977 13:19:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3964149 /var/tmp/bperf.sock 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3964149 ']' 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.977 13:19:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:27.977 [2024-07-15 13:19:49.540454] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 00:27:27.977 [2024-07-15 13:19:49.540514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964149 ] 00:27:27.977 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.977 [2024-07-15 13:19:49.600308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.234 [2024-07-15 13:19:49.717079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.234 13:19:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.234 13:19:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:28.234 13:19:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:28.234 13:19:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:28.491 13:19:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JHswQ8ZK0C 00:27:28.491 13:19:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JHswQ8ZK0C 00:27:28.748 13:19:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:28.748 13:19:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:28.748 13:19:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:28.748 13:19:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:28.748 13:19:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.004 13:19:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.GOZbIDZ3Pd == \/\t\m\p\/\t\m\p\.\G\O\Z\b\I\D\Z\3\P\d ]] 00:27:29.004 13:19:50 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:29.004 13:19:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:29.004 13:19:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.004 13:19:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.004 13:19:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:29.261 13:19:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JHswQ8ZK0C == \/\t\m\p\/\t\m\p\.\J\H\s\w\Q\8\Z\K\0\C ]] 00:27:29.261 13:19:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:29.261 13:19:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.261 13:19:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.261 13:19:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.261 13:19:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.261 13:19:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.517 13:19:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:29.517 13:19:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:29.517 13:19:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:29.517 13:19:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.517 13:19:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.517 13:19:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.517 13:19:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:29.773 13:19:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:29.773 13:19:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:29.773 13:19:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:30.031 [2024-07-15 13:19:51.567670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:30.031 nvme0n1 00:27:30.031 13:19:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:30.031 13:19:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.031 13:19:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.031 13:19:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.031 13:19:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.031 13:19:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.288 13:19:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:30.288 13:19:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:30.288 13:19:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.288 13:19:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.289 13:19:51 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.289 13:19:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.289 13:19:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.546 13:19:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:30.546 13:19:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.803 Running I/O for 1 seconds... 00:27:31.734 00:27:31.734 Latency(us) 00:27:31.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.734 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:31.734 nvme0n1 : 1.02 4763.78 18.61 0.00 0.00 26541.42 9806.13 35923.44 00:27:31.734 =================================================================================================================== 00:27:31.734 Total : 4763.78 18.61 0.00 0.00 26541.42 9806.13 35923.44 00:27:31.734 { 00:27:31.734 "core_count": 1, 00:27:31.734 "test_results": [ 00:27:31.734 { 00:27:31.734 "job": "nvme0n1", 00:27:31.734 "test_status": "finished", 00:27:31.734 "core_mask": "0x2", 00:27:31.735 "workload": "randrw", 00:27:31.735 "percentage": 50, 00:27:31.735 "queue_depth": 128, 00:27:31.735 "io_size": 4096, 00:27:31.735 "runtime": 1.0248160362243652, 00:27:31.735 "io_per_second": 4763.781986229723, 00:27:31.735 "MiB_per_second": 18.608523383709855, 00:27:31.735 "fails_per_second": 0.0, 00:27:31.735 "timeout_per_second": 0.0, 00:27:31.735 "average_latency_us": 26541.41874338083, 00:27:31.735 "min_latency_us": 9806.127407407408, 00:27:31.735 "max_latency_us": 35923.43703703704 00:27:31.735 } 00:27:31.735 ] 00:27:31.735 } 00:27:31.735 13:19:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:31.735 13:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:31.991 13:19:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:31.991 13:19:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.991 13:19:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.991 13:19:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.991 13:19:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:31.991 13:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.249 13:19:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:32.249 13:19:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:32.249 13:19:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:32.249 13:19:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.249 13:19:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.249 13:19:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.249 13:19:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:32.506 13:19:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:32.506 13:19:54 keyring_file -- keyring/file.sh@69 -- # 
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.506 13:19:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.506 13:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:32.799 [2024-07-15 13:19:54.300503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:32.799 [2024-07-15 13:19:54.301121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a39a0 (107): Transport endpoint is not connected 00:27:32.799 [2024-07-15 13:19:54.302114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a39a0 (9): Bad file descriptor 00:27:32.799 [2024-07-15 13:19:54.303113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:32.799 [2024-07-15 13:19:54.303139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:32.799 [2024-07-15 13:19:54.303162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
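The failure above is the expected outcome of keyring/file.sh@69: key1 holds a different PSK than the one the target listener was configured with, so the TLS handshake is torn down mid-connect and the attach surfaces as the -5 (Input/output error) JSON-RPC response dumped below. A minimal by-hand equivalent of this negative check, sketched against the same bperf socket and key names as the trace (the trailing || echo is illustrative, not part of the test):

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
# key1 does not match the listener's PSK, so this attach is expected to fail
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 \
  || echo 'attach with key1 failed as expected'
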
00:27:32.799 request: 00:27:32.799 { 00:27:32.799 "name": "nvme0", 00:27:32.799 "trtype": "tcp", 00:27:32.799 "traddr": "127.0.0.1", 00:27:32.799 "adrfam": "ipv4", 00:27:32.799 "trsvcid": "4420", 00:27:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.799 "prchk_reftag": false, 00:27:32.799 "prchk_guard": false, 00:27:32.799 "hdgst": false, 00:27:32.799 "ddgst": false, 00:27:32.799 "psk": "key1", 00:27:32.799 "method": "bdev_nvme_attach_controller", 00:27:32.799 "req_id": 1 00:27:32.799 } 00:27:32.799 Got JSON-RPC error response 00:27:32.799 response: 00:27:32.799 { 00:27:32.799 "code": -5, 00:27:32.799 "message": "Input/output error" 00:27:32.799 } 00:27:32.799 13:19:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:32.799 13:19:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:32.799 13:19:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:32.799 13:19:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:32.799 13:19:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:32.799 13:19:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:32.799 13:19:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.799 13:19:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.799 13:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.799 13:19:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.057 13:19:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:33.057 13:19:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:33.057 13:19:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.057 13:19:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.057 13:19:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.057 13:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.057 13:19:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.315 13:19:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:33.315 13:19:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:33.315 13:19:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:33.571 13:19:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:33.571 13:19:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:33.828 13:19:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:33.828 13:19:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:33.828 13:19:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.086 13:19:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:34.086 13:19:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.GOZbIDZ3Pd 00:27:34.086 13:19:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 
/tmp/tmp.GOZbIDZ3Pd 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.086 13:19:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:34.086 13:19:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:34.344 [2024-07-15 13:19:55.788073] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GOZbIDZ3Pd': 0100660 00:27:34.344 [2024-07-15 13:19:55.788106] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:34.344 request: 00:27:34.344 { 00:27:34.344 "name": "key0", 00:27:34.344 "path": "/tmp/tmp.GOZbIDZ3Pd", 00:27:34.344 "method": "keyring_file_add_key", 00:27:34.344 "req_id": 1 00:27:34.344 } 00:27:34.344 Got JSON-RPC error response 00:27:34.344 response: 00:27:34.344 { 00:27:34.344 "code": -1, 00:27:34.344 "message": "Operation not permitted" 00:27:34.344 } 00:27:34.344 13:19:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:34.344 13:19:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.344 13:19:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.344 13:19:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.344 13:19:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.GOZbIDZ3Pd 00:27:34.344 13:19:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:34.344 13:19:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GOZbIDZ3Pd 00:27:34.602 13:19:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.GOZbIDZ3Pd 00:27:34.602 13:19:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:34.602 13:19:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.602 13:19:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.602 13:19:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.602 13:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.602 13:19:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.860 13:19:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:34.860 13:19:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp 
-a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.860 13:19:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.860 13:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.860 [2024-07-15 13:19:56.546200] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GOZbIDZ3Pd': No such file or directory 00:27:34.860 [2024-07-15 13:19:56.546242] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:34.860 [2024-07-15 13:19:56.546270] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:34.860 [2024-07-15 13:19:56.546280] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:34.860 [2024-07-15 13:19:56.546291] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:34.860 request: 00:27:34.860 { 00:27:34.860 "name": "nvme0", 00:27:34.860 "trtype": "tcp", 00:27:34.860 "traddr": "127.0.0.1", 00:27:34.860 "adrfam": "ipv4", 00:27:34.860 "trsvcid": "4420", 00:27:34.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.860 "prchk_reftag": false, 00:27:34.860 "prchk_guard": false, 00:27:34.860 "hdgst": false, 00:27:34.860 "ddgst": false, 00:27:34.860 "psk": "key0", 00:27:34.860 "method": "bdev_nvme_attach_controller", 00:27:34.860 "req_id": 1 00:27:34.860 } 00:27:34.860 Got JSON-RPC error response 00:27:34.860 response: 00:27:34.860 { 00:27:34.860 "code": -19, 00:27:34.860 "message": "No such device" 00:27:34.860 } 00:27:35.118 13:19:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:35.118 13:19:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.118 13:19:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.118 13:19:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.118 13:19:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:35.118 13:19:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:35.118 13:19:56 keyring_file -- 
keyring/common.sh@18 -- # mktemp 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dHOpsyekUO 00:27:35.118 13:19:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:35.118 13:19:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:35.375 13:19:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dHOpsyekUO 00:27:35.375 13:19:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dHOpsyekUO 00:27:35.375 13:19:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dHOpsyekUO 00:27:35.375 13:19:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dHOpsyekUO 00:27:35.375 13:19:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dHOpsyekUO 00:27:35.633 13:19:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.633 13:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.891 nvme0n1 00:27:35.891 13:19:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:35.891 13:19:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.891 13:19:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.891 13:19:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.891 13:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.891 13:19:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.148 13:19:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:36.148 13:19:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:36.148 13:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:36.404 13:19:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:36.404 13:19:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:36.404 13:19:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.404 13:19:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.404 13:19:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.660 13:19:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:36.660 13:19:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:36.661 13:19:58 keyring_file 
-- keyring/common.sh@12 -- # get_key key0 00:27:36.661 13:19:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.661 13:19:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.661 13:19:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.661 13:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.917 13:19:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:36.917 13:19:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:36.917 13:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:37.175 13:19:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:37.175 13:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.175 13:19:58 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:37.432 13:19:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:37.432 13:19:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dHOpsyekUO 00:27:37.432 13:19:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dHOpsyekUO 00:27:37.690 13:19:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JHswQ8ZK0C 00:27:37.690 13:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JHswQ8ZK0C 00:27:37.690 13:19:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.690 13:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:38.255 nvme0n1 00:27:38.255 13:19:59 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:38.255 13:19:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:38.513 13:20:00 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:38.513 "subsystems": [ 00:27:38.513 { 00:27:38.513 "subsystem": "keyring", 00:27:38.513 "config": [ 00:27:38.513 { 00:27:38.513 "method": "keyring_file_add_key", 00:27:38.513 "params": { 00:27:38.513 "name": "key0", 00:27:38.513 "path": "/tmp/tmp.dHOpsyekUO" 00:27:38.513 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "keyring_file_add_key", 00:27:38.514 "params": { 00:27:38.514 "name": "key1", 00:27:38.514 "path": "/tmp/tmp.JHswQ8ZK0C" 00:27:38.514 } 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "iobuf", 00:27:38.514 "config": [ 00:27:38.514 { 00:27:38.514 "method": "iobuf_set_options", 00:27:38.514 "params": { 00:27:38.514 "small_pool_count": 8192, 00:27:38.514 "large_pool_count": 1024, 00:27:38.514 "small_bufsize": 8192, 00:27:38.514 
"large_bufsize": 135168 00:27:38.514 } 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "sock", 00:27:38.514 "config": [ 00:27:38.514 { 00:27:38.514 "method": "sock_set_default_impl", 00:27:38.514 "params": { 00:27:38.514 "impl_name": "posix" 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "sock_impl_set_options", 00:27:38.514 "params": { 00:27:38.514 "impl_name": "ssl", 00:27:38.514 "recv_buf_size": 4096, 00:27:38.514 "send_buf_size": 4096, 00:27:38.514 "enable_recv_pipe": true, 00:27:38.514 "enable_quickack": false, 00:27:38.514 "enable_placement_id": 0, 00:27:38.514 "enable_zerocopy_send_server": true, 00:27:38.514 "enable_zerocopy_send_client": false, 00:27:38.514 "zerocopy_threshold": 0, 00:27:38.514 "tls_version": 0, 00:27:38.514 "enable_ktls": false 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "sock_impl_set_options", 00:27:38.514 "params": { 00:27:38.514 "impl_name": "posix", 00:27:38.514 "recv_buf_size": 2097152, 00:27:38.514 "send_buf_size": 2097152, 00:27:38.514 "enable_recv_pipe": true, 00:27:38.514 "enable_quickack": false, 00:27:38.514 "enable_placement_id": 0, 00:27:38.514 "enable_zerocopy_send_server": true, 00:27:38.514 "enable_zerocopy_send_client": false, 00:27:38.514 "zerocopy_threshold": 0, 00:27:38.514 "tls_version": 0, 00:27:38.514 "enable_ktls": false 00:27:38.514 } 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "vmd", 00:27:38.514 "config": [] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "accel", 00:27:38.514 "config": [ 00:27:38.514 { 00:27:38.514 "method": "accel_set_options", 00:27:38.514 "params": { 00:27:38.514 "small_cache_size": 128, 00:27:38.514 "large_cache_size": 16, 00:27:38.514 "task_count": 2048, 00:27:38.514 "sequence_count": 2048, 00:27:38.514 "buf_count": 2048 00:27:38.514 } 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "bdev", 00:27:38.514 "config": [ 00:27:38.514 { 00:27:38.514 "method": "bdev_set_options", 00:27:38.514 "params": { 00:27:38.514 "bdev_io_pool_size": 65535, 00:27:38.514 "bdev_io_cache_size": 256, 00:27:38.514 "bdev_auto_examine": true, 00:27:38.514 "iobuf_small_cache_size": 128, 00:27:38.514 "iobuf_large_cache_size": 16 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_raid_set_options", 00:27:38.514 "params": { 00:27:38.514 "process_window_size_kb": 1024 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_iscsi_set_options", 00:27:38.514 "params": { 00:27:38.514 "timeout_sec": 30 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_nvme_set_options", 00:27:38.514 "params": { 00:27:38.514 "action_on_timeout": "none", 00:27:38.514 "timeout_us": 0, 00:27:38.514 "timeout_admin_us": 0, 00:27:38.514 "keep_alive_timeout_ms": 10000, 00:27:38.514 "arbitration_burst": 0, 00:27:38.514 "low_priority_weight": 0, 00:27:38.514 "medium_priority_weight": 0, 00:27:38.514 "high_priority_weight": 0, 00:27:38.514 "nvme_adminq_poll_period_us": 10000, 00:27:38.514 "nvme_ioq_poll_period_us": 0, 00:27:38.514 "io_queue_requests": 512, 00:27:38.514 "delay_cmd_submit": true, 00:27:38.514 "transport_retry_count": 4, 00:27:38.514 "bdev_retry_count": 3, 00:27:38.514 "transport_ack_timeout": 0, 00:27:38.514 "ctrlr_loss_timeout_sec": 0, 00:27:38.514 "reconnect_delay_sec": 0, 00:27:38.514 "fast_io_fail_timeout_sec": 0, 00:27:38.514 "disable_auto_failback": false, 00:27:38.514 "generate_uuids": false, 00:27:38.514 
"transport_tos": 0, 00:27:38.514 "nvme_error_stat": false, 00:27:38.514 "rdma_srq_size": 0, 00:27:38.514 "io_path_stat": false, 00:27:38.514 "allow_accel_sequence": false, 00:27:38.514 "rdma_max_cq_size": 0, 00:27:38.514 "rdma_cm_event_timeout_ms": 0, 00:27:38.514 "dhchap_digests": [ 00:27:38.514 "sha256", 00:27:38.514 "sha384", 00:27:38.514 "sha512" 00:27:38.514 ], 00:27:38.514 "dhchap_dhgroups": [ 00:27:38.514 "null", 00:27:38.514 "ffdhe2048", 00:27:38.514 "ffdhe3072", 00:27:38.514 "ffdhe4096", 00:27:38.514 "ffdhe6144", 00:27:38.514 "ffdhe8192" 00:27:38.514 ] 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_nvme_attach_controller", 00:27:38.514 "params": { 00:27:38.514 "name": "nvme0", 00:27:38.514 "trtype": "TCP", 00:27:38.514 "adrfam": "IPv4", 00:27:38.514 "traddr": "127.0.0.1", 00:27:38.514 "trsvcid": "4420", 00:27:38.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.514 "prchk_reftag": false, 00:27:38.514 "prchk_guard": false, 00:27:38.514 "ctrlr_loss_timeout_sec": 0, 00:27:38.514 "reconnect_delay_sec": 0, 00:27:38.514 "fast_io_fail_timeout_sec": 0, 00:27:38.514 "psk": "key0", 00:27:38.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.514 "hdgst": false, 00:27:38.514 "ddgst": false 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_nvme_set_hotplug", 00:27:38.514 "params": { 00:27:38.514 "period_us": 100000, 00:27:38.514 "enable": false 00:27:38.514 } 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "method": "bdev_wait_for_examine" 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }, 00:27:38.514 { 00:27:38.514 "subsystem": "nbd", 00:27:38.514 "config": [] 00:27:38.514 } 00:27:38.514 ] 00:27:38.514 }' 00:27:38.514 13:20:00 keyring_file -- keyring/file.sh@114 -- # killprocess 3964149 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3964149 ']' 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3964149 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3964149 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3964149' 00:27:38.514 killing process with pid 3964149 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@967 -- # kill 3964149 00:27:38.514 Received shutdown signal, test time was about 1.000000 seconds 00:27:38.514 00:27:38.514 Latency(us) 00:27:38.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.514 =================================================================================================================== 00:27:38.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.514 13:20:00 keyring_file -- common/autotest_common.sh@972 -- # wait 3964149 00:27:38.773 13:20:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=3965487 00:27:38.773 13:20:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3965487 /var/tmp/bperf.sock 00:27:38.773 13:20:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3965487 ']' 00:27:38.773 13:20:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.773 13:20:00 keyring_file -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.773 13:20:00 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:38.773 13:20:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.773 13:20:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:38.773 "subsystems": [ 00:27:38.773 { 00:27:38.773 "subsystem": "keyring", 00:27:38.773 "config": [ 00:27:38.773 { 00:27:38.773 "method": "keyring_file_add_key", 00:27:38.773 "params": { 00:27:38.773 "name": "key0", 00:27:38.773 "path": "/tmp/tmp.dHOpsyekUO" 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "keyring_file_add_key", 00:27:38.773 "params": { 00:27:38.773 "name": "key1", 00:27:38.773 "path": "/tmp/tmp.JHswQ8ZK0C" 00:27:38.773 } 00:27:38.773 } 00:27:38.773 ] 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "subsystem": "iobuf", 00:27:38.773 "config": [ 00:27:38.773 { 00:27:38.773 "method": "iobuf_set_options", 00:27:38.773 "params": { 00:27:38.773 "small_pool_count": 8192, 00:27:38.773 "large_pool_count": 1024, 00:27:38.773 "small_bufsize": 8192, 00:27:38.773 "large_bufsize": 135168 00:27:38.773 } 00:27:38.773 } 00:27:38.773 ] 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "subsystem": "sock", 00:27:38.773 "config": [ 00:27:38.773 { 00:27:38.773 "method": "sock_set_default_impl", 00:27:38.773 "params": { 00:27:38.773 "impl_name": "posix" 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "sock_impl_set_options", 00:27:38.773 "params": { 00:27:38.773 "impl_name": "ssl", 00:27:38.773 "recv_buf_size": 4096, 00:27:38.773 "send_buf_size": 4096, 00:27:38.773 "enable_recv_pipe": true, 00:27:38.773 "enable_quickack": false, 00:27:38.773 "enable_placement_id": 0, 00:27:38.773 "enable_zerocopy_send_server": true, 00:27:38.773 "enable_zerocopy_send_client": false, 00:27:38.773 "zerocopy_threshold": 0, 00:27:38.773 "tls_version": 0, 00:27:38.773 "enable_ktls": false 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "sock_impl_set_options", 00:27:38.773 "params": { 00:27:38.773 "impl_name": "posix", 00:27:38.773 "recv_buf_size": 2097152, 00:27:38.773 "send_buf_size": 2097152, 00:27:38.773 "enable_recv_pipe": true, 00:27:38.773 "enable_quickack": false, 00:27:38.773 "enable_placement_id": 0, 00:27:38.773 "enable_zerocopy_send_server": true, 00:27:38.773 "enable_zerocopy_send_client": false, 00:27:38.773 "zerocopy_threshold": 0, 00:27:38.773 "tls_version": 0, 00:27:38.773 "enable_ktls": false 00:27:38.773 } 00:27:38.773 } 00:27:38.773 ] 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "subsystem": "vmd", 00:27:38.773 "config": [] 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "subsystem": "accel", 00:27:38.773 "config": [ 00:27:38.773 { 00:27:38.773 "method": "accel_set_options", 00:27:38.773 "params": { 00:27:38.773 "small_cache_size": 128, 00:27:38.773 "large_cache_size": 16, 00:27:38.773 "task_count": 2048, 00:27:38.773 "sequence_count": 2048, 00:27:38.773 "buf_count": 2048 00:27:38.773 } 00:27:38.773 } 00:27:38.773 ] 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "subsystem": "bdev", 00:27:38.773 "config": [ 00:27:38.773 { 00:27:38.773 "method": "bdev_set_options", 00:27:38.773 "params": { 00:27:38.773 "bdev_io_pool_size": 65535, 00:27:38.773 
"bdev_io_cache_size": 256, 00:27:38.773 "bdev_auto_examine": true, 00:27:38.773 "iobuf_small_cache_size": 128, 00:27:38.773 "iobuf_large_cache_size": 16 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "bdev_raid_set_options", 00:27:38.773 "params": { 00:27:38.773 "process_window_size_kb": 1024 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "bdev_iscsi_set_options", 00:27:38.773 "params": { 00:27:38.773 "timeout_sec": 30 00:27:38.773 } 00:27:38.773 }, 00:27:38.773 { 00:27:38.773 "method": "bdev_nvme_set_options", 00:27:38.773 "params": { 00:27:38.774 "action_on_timeout": "none", 00:27:38.774 "timeout_us": 0, 00:27:38.774 "timeout_admin_us": 0, 00:27:38.774 "keep_alive_timeout_ms": 10000, 00:27:38.774 "arbitration_burst": 0, 00:27:38.774 "low_priority_weight": 0, 00:27:38.774 "medium_priority_weight": 0, 00:27:38.774 "high_priority_weight": 0, 00:27:38.774 "nvme_adminq_poll_period_us": 10000, 00:27:38.774 "nvme_ioq_poll_period_us": 0, 00:27:38.774 "io_queue_requests": 512, 00:27:38.774 "delay_cmd_submit": true, 00:27:38.774 "transport_retry_count": 4, 00:27:38.774 "bdev_retry_count": 3, 00:27:38.774 "transport_ack_timeout": 0, 00:27:38.774 "ctrlr_loss_timeout_sec": 0, 00:27:38.774 "reconnect_delay_sec": 0, 00:27:38.774 "fast_io_fail_timeout_sec": 0, 00:27:38.774 "disable_auto_failback": false, 00:27:38.774 "generate_uuids": false, 00:27:38.774 "transport_tos": 0, 00:27:38.774 "nvme_error_stat": false, 00:27:38.774 "rdma_srq_size": 0, 00:27:38.774 "io_path_stat": false, 00:27:38.774 "allow_accel_sequence": false, 00:27:38.774 "rdma_max_cq_size": 0, 00:27:38.774 "rdma_cm_event_timeout_ms": 0, 00:27:38.774 "dhchap_digests": [ 00:27:38.774 "sha256", 00:27:38.774 "sha384", 00:27:38.774 "sha512" 00:27:38.774 ], 00:27:38.774 "dhchap_dhgroups": [ 00:27:38.774 "null", 00:27:38.774 "ffdhe2048", 00:27:38.774 "ffdhe3072", 00:27:38.774 "ffdhe4096", 00:27:38.774 "ffdhe6144", 00:27:38.774 "ffdhe8192" 00:27:38.774 ] 00:27:38.774 } 00:27:38.774 }, 00:27:38.774 { 00:27:38.774 "method": "bdev_nvme_attach_controller", 00:27:38.774 "params": { 00:27:38.774 "name": "nvme0", 00:27:38.774 "trtype": "TCP", 00:27:38.774 "adrfam": "IPv4", 00:27:38.774 "traddr": "127.0.0.1", 00:27:38.774 "trsvcid": "4420", 00:27:38.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.774 "prchk_reftag": false, 00:27:38.774 "prchk_guard": false, 00:27:38.774 "ctrlr_loss_timeout_sec": 0, 00:27:38.774 "reconnect_delay_sec": 0, 00:27:38.774 "fast_io_fail_timeout_sec": 0, 00:27:38.774 "psk": "key0", 00:27:38.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.774 "hdgst": false, 00:27:38.774 "ddgst": false 00:27:38.774 } 00:27:38.774 }, 00:27:38.774 { 00:27:38.774 "method": "bdev_nvme_set_hotplug", 00:27:38.774 "params": { 00:27:38.774 "period_us": 100000, 00:27:38.774 "enable": false 00:27:38.774 } 00:27:38.774 }, 00:27:38.774 { 00:27:38.774 "method": "bdev_wait_for_examine" 00:27:38.774 } 00:27:38.774 ] 00:27:38.774 }, 00:27:38.774 { 00:27:38.774 "subsystem": "nbd", 00:27:38.774 "config": [] 00:27:38.774 } 00:27:38.774 ] 00:27:38.774 }' 00:27:38.774 13:20:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.774 13:20:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:38.774 [2024-07-15 13:20:00.363963] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
00:27:38.774 [2024-07-15 13:20:00.364057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965487 ] 00:27:38.774 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.774 [2024-07-15 13:20:00.421523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.032 [2024-07-15 13:20:00.531934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.032 [2024-07-15 13:20:00.714358] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:39.598 13:20:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.598 13:20:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:39.598 13:20:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:39.598 13:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.598 13:20:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:39.856 13:20:01 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:39.856 13:20:01 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:39.856 13:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:39.856 13:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.856 13:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.856 13:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.856 13:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:40.114 13:20:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:40.114 13:20:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:40.114 13:20:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:40.114 13:20:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.114 13:20:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.114 13:20:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.114 13:20:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:40.679 13:20:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dHOpsyekUO /tmp/tmp.JHswQ8ZK0C 00:27:40.679 13:20:02 keyring_file -- keyring/file.sh@20 -- # killprocess 3965487 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3965487 ']' 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3965487 00:27:40.679 13:20:02 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965487 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965487' 00:27:40.679 killing process with pid 3965487 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@967 -- # kill 3965487 00:27:40.679 Received shutdown signal, test time was about 1.000000 seconds 00:27:40.679 00:27:40.679 Latency(us) 00:27:40.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.679 =================================================================================================================== 00:27:40.679 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:40.679 13:20:02 keyring_file -- common/autotest_common.sh@972 -- # wait 3965487 00:27:40.937 13:20:02 keyring_file -- keyring/file.sh@21 -- # killprocess 3964013 00:27:40.937 13:20:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3964013 ']' 00:27:40.937 13:20:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3964013 00:27:40.937 13:20:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:40.937 13:20:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.937 13:20:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3964013 00:27:41.195 13:20:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.195 13:20:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.195 13:20:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3964013' 00:27:41.195 killing process with pid 3964013 00:27:41.195 13:20:02 keyring_file -- common/autotest_common.sh@967 -- # kill 3964013 00:27:41.195 [2024-07-15 13:20:02.642154] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:41.195 13:20:02 keyring_file -- common/autotest_common.sh@972 -- # wait 3964013 00:27:41.453 00:27:41.453 real 0m14.799s 00:27:41.453 user 0m35.773s 00:27:41.453 sys 0m3.388s 00:27:41.453 13:20:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.453 13:20:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:41.453 ************************************ 00:27:41.453 END TEST keyring_file 00:27:41.453 ************************************ 00:27:41.453 13:20:03 -- common/autotest_common.sh@1142 -- # return 0 00:27:41.453 13:20:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:41.453 13:20:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:41.453 13:20:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:41.453 13:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.453 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:27:41.453 ************************************ 00:27:41.453 START TEST keyring_linux 00:27:41.453 ************************************ 00:27:41.453 13:20:03 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:41.711 * Looking for test storage... 00:27:41.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.711 13:20:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.711 13:20:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.711 13:20:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.711 13:20:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.711 13:20:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.711 13:20:03 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.711 13:20:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:41.711 13:20:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:41.711 13:20:03 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:41.711 /tmp/:spdk-test:key0 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:41.711 13:20:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:41.711 13:20:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:41.711 /tmp/:spdk-test:key1 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3965969 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:41.711 13:20:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3965969 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3965969 ']' 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.711 13:20:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:41.711 [2024-07-15 13:20:03.347450] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
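The prep_key/format_interchange_psk steps traced above are what turn the raw hex string 00112233445566778899aabbccddeeff into the NVMeTLSkey-1 payload written to /tmp/:spdk-test:key0. A sketch of what the python - heredoc in nvmf/common.sh most plausibly computes, inferred from the interchange format (the little-endian CRC32 suffix and the two-digit digest field are assumptions, so check the script itself before relying on this):

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"  # raw key text from the trace
crc = struct.pack("<I", zlib.crc32(key))   # assumed 4-byte little-endian CRC32 suffix
# digest=0 in the trace selects no HMAC transform, hence the "00" field
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
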
00:27:41.711 [2024-07-15 13:20:03.347532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965969 ] 00:27:41.711 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.711 [2024-07-15 13:20:03.404566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.969 [2024-07-15 13:20:03.515602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 [2024-07-15 13:20:03.783657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.226 null0 00:27:42.226 [2024-07-15 13:20:03.815707] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:42.226 [2024-07-15 13:20:03.816222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:42.226 671945660 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:42.226 524366711 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3965981 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:42.226 13:20:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3965981 /var/tmp/bperf.sock 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3965981 ']' 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.226 13:20:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 [2024-07-15 13:20:03.880800] Starting SPDK v24.09-pre git sha1 417133c03 / DPDK 24.03.0 initialization... 
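Unlike the keyring_file run, these PSKs live in the kernel session keyring: the two keyctl add calls above returned the serials 671945660 and 524366711, and the keyring/linux.sh checks further down resolve the key names back to those serials and compare the payloads byte for byte. The round trip, sketched with the trace's names ($psk is a placeholder for the full NVMeTLSkey-1 string):

# keyctl add prints the new key's serial number on stdout
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)
keyctl search @s user :spdk-test:key0   # resolves the name to the same serial
keyctl print "$sn"                      # dumps the payload for comparison
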
00:27:42.226 [2024-07-15 13:20:03.880884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965981 ] 00:27:42.226 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.483 [2024-07-15 13:20:03.940830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.483 [2024-07-15 13:20:04.063432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.483 13:20:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.483 13:20:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:42.483 13:20:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:42.483 13:20:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:42.740 13:20:04 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:42.740 13:20:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:42.997 13:20:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:42.997 13:20:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:43.254 [2024-07-15 13:20:04.893091] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:43.511 nvme0n1 00:27:43.511 13:20:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:43.511 13:20:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:43.511 13:20:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:43.511 13:20:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:43.511 13:20:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:43.511 13:20:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.769 13:20:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:43.769 13:20:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:43.769 13:20:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:43.769 13:20:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:43.769 13:20:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.769 13:20:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:43.769 13:20:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@25 -- # sn=671945660 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 671945660 == \6\7\1\9\4\5\6\6\0 ]] 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 671945660 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:44.027 13:20:05 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.027 Running I/O for 1 seconds... 00:27:44.960 00:27:44.960 Latency(us) 00:27:44.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:44.960 nvme0n1 : 1.02 4284.34 16.74 0.00 0.00 29569.18 7961.41 39612.87 00:27:44.960 =================================================================================================================== 00:27:44.960 Total : 4284.34 16.74 0.00 0.00 29569.18 7961.41 39612.87 00:27:44.960 { 00:27:44.960 "core_count": 1, 00:27:44.960 "test_results": [ 00:27:44.960 { 00:27:44.961 "job": "nvme0n1", 00:27:44.961 "test_status": "finished", 00:27:44.961 "core_mask": "0x2", 00:27:44.961 "workload": "randread", 00:27:44.961 "queue_depth": 128, 00:27:44.961 "io_size": 4096, 00:27:44.961 "runtime": 1.0216269493103027, 00:27:44.961 "io_per_second": 4284.34252422851, 00:27:44.961 "MiB_per_second": 16.735712985267618, 00:27:44.961 "fails_per_second": 0.0, 00:27:44.961 "timeout_per_second": 0.0, 00:27:44.961 "average_latency_us": 29569.17507983652, 00:27:44.961 "min_latency_us": 7961.41037037037, 00:27:44.961 "max_latency_us": 39612.87111111111 00:27:44.961 } 00:27:44.961 ] 00:27:44.961 } 00:27:44.961 13:20:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:44.961 13:20:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:45.219 13:20:06 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:45.219 13:20:06 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:45.219 13:20:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:45.219 13:20:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:45.219 13:20:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.219 13:20:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:45.523 13:20:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:45.523 13:20:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:45.523 13:20:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:45.523 13:20:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.523 13:20:07 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.523 13:20:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:45.781 [2024-07-15 13:20:07.359629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:45.781 [2024-07-15 13:20:07.359989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21433f0 (107): Transport endpoint is not connected 00:27:45.781 [2024-07-15 13:20:07.360978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21433f0 (9): Bad file descriptor 00:27:45.781 [2024-07-15 13:20:07.361976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.781 [2024-07-15 13:20:07.361995] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:45.781 [2024-07-15 13:20:07.362008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
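This attach is the test's negative path: :spdk-test:key1 carries a PSK the target side does not accept in this setup, so the TCP connection is torn down and the RPC is expected to fail; the JSON-RPC error dumped below (code -5, Input/output error) is the outcome the NOT wrapper is waiting for. A rough standalone equivalent of that expected-failure check, with the rpc.py path shortened; the real harness routes the call through bperf_cmd:

    # The attach must fail; a zero exit status would itself fail the test.
    if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
          -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
          --psk :spdk-test:key1; then
        echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
        exit 1
    fi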
00:27:45.781 request:
00:27:45.781 {
00:27:45.781 "name": "nvme0",
00:27:45.781 "trtype": "tcp",
00:27:45.781 "traddr": "127.0.0.1",
00:27:45.781 "adrfam": "ipv4",
00:27:45.781 "trsvcid": "4420",
00:27:45.781 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:45.781 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:45.781 "prchk_reftag": false,
00:27:45.781 "prchk_guard": false,
00:27:45.781 "hdgst": false,
00:27:45.781 "ddgst": false,
00:27:45.781 "psk": ":spdk-test:key1",
00:27:45.781 "method": "bdev_nvme_attach_controller",
00:27:45.781 "req_id": 1
00:27:45.781 }
00:27:45.781 Got JSON-RPC error response
00:27:45.781 response:
00:27:45.781 {
00:27:45.781 "code": -5,
00:27:45.781 "message": "Input/output error"
00:27:45.781 }
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@33 -- # sn=671945660
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 671945660
00:27:45.781 1 links removed
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@33 -- # sn=524366711
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 524366711
00:27:45.781 1 links removed
00:27:45.781 13:20:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3965981
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3965981 ']'
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3965981
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965981
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965981'
00:27:45.781 killing process with pid 3965981
00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@967 -- # kill 3965981
00:27:45.781 Received shutdown signal, test time was about 1.000000 seconds
00:27:45.781
00:27:45.781 Latency(us)
00:27:45.781 Device Information :
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.781 =================================================================================================================== 00:27:45.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.781 13:20:07 keyring_linux -- common/autotest_common.sh@972 -- # wait 3965981 00:27:46.039 13:20:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3965969 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3965969 ']' 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3965969 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965969 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965969' 00:27:46.039 killing process with pid 3965969 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@967 -- # kill 3965969 00:27:46.039 13:20:07 keyring_linux -- common/autotest_common.sh@972 -- # wait 3965969 00:27:46.604 00:27:46.604 real 0m5.045s 00:27:46.604 user 0m9.318s 00:27:46.604 sys 0m1.545s 00:27:46.604 13:20:08 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.604 13:20:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:46.604 ************************************ 00:27:46.604 END TEST keyring_linux 00:27:46.604 ************************************ 00:27:46.604 13:20:08 -- common/autotest_common.sh@1142 -- # return 0 00:27:46.604 13:20:08 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:46.604 13:20:08 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:46.604 13:20:08 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:46.604 13:20:08 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:46.604 13:20:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:46.604 13:20:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:46.604 13:20:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:46.604 13:20:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.604 13:20:08 -- common/autotest_common.sh@10 -- # set +x 00:27:46.604 13:20:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:46.604 13:20:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:46.604 13:20:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:46.604 13:20:08 -- common/autotest_common.sh@10 -- # set +x 00:27:48.498 INFO: APP EXITING 00:27:48.498 INFO: killing all VMs 00:27:48.498 INFO: killing vhost 
app 00:27:48.498 INFO: EXIT DONE 00:27:49.429 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:49.429 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:49.429 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:49.429 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:49.429 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:49.429 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:49.429 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:49.429 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:49.429 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:49.429 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:49.429 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:49.429 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:49.688 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:49.688 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:49.688 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:49.688 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:49.688 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:51.084 Cleaning 00:27:51.084 Removing: /var/run/dpdk/spdk0/config 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:51.084 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:51.084 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:51.085 Removing: /var/run/dpdk/spdk1/config 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:51.085 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:51.085 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:51.085 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:51.085 Removing: /var/run/dpdk/spdk2/config 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:51.085 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:51.085 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:51.085 Removing: /var/run/dpdk/spdk3/config 00:27:51.085 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:51.085 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:51.085 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:51.085 Removing: /var/run/dpdk/spdk4/config 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:51.085 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:51.085 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:51.085 Removing: /dev/shm/bdev_svc_trace.1 00:27:51.085 Removing: /dev/shm/nvmf_trace.0 00:27:51.085 Removing: /dev/shm/spdk_tgt_trace.pid3704491 00:27:51.085 Removing: /var/run/dpdk/spdk0 00:27:51.085 Removing: /var/run/dpdk/spdk1 00:27:51.085 Removing: /var/run/dpdk/spdk2 00:27:51.085 Removing: /var/run/dpdk/spdk3 00:27:51.085 Removing: /var/run/dpdk/spdk4 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3702827 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3703575 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3704491 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3704930 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3705625 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3705765 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3706483 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3706524 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3706766 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3708060 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3709021 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3709289 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3709475 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3709683 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3709892 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3710149 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3710313 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3710491 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3710801 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3713154 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3713318 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3713482 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3713611 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3713922 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3714071 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3714592 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3714604 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3714896 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3714926 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3715352 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3715452 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3716079 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3716353 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3716559 00:27:51.085 Removing: 
/var/run/dpdk/spdk_pid3716727 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3716748 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3716940 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3717097 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3717365 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3717532 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3717685 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3717957 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3718120 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3718277 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3718545 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3718714 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3718867 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3719139 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3719301 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3719459 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3719727 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3719893 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3720056 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3720324 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3720487 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3720761 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3720921 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3721107 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3721315 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3723374 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3749668 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3752282 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3759768 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3763064 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3765421 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3765830 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3769800 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3773775 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3773780 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3774430 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3774979 00:27:51.085 Removing: /var/run/dpdk/spdk_pid3775632 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3776159 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3776162 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3776308 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3776434 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3776445 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3777100 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3777719 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3778296 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3778698 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3778790 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3778964 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3779847 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3780569 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3786671 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3786942 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3789449 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3793140 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3795203 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3801615 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3806835 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3808116 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3808783 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3819213 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3821816 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3847163 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3850067 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3851165 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3852448 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3852577 00:27:51.344 Removing: 
/var/run/dpdk/spdk_pid3852715 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3852760 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3853173 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3854504 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3855367 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3855793 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3857420 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3857969 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3858416 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3860928 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3866840 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3869599 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3873451 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3874780 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3876022 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3878570 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3880873 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3885143 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3885154 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3887924 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3888167 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3888302 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3888578 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3888698 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3891346 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3891794 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3894463 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3896318 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3899852 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3903313 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3909544 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3914645 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3914655 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3927121 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3927607 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3928069 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3928474 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3929182 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3929592 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3930133 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3930576 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3933160 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3933428 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3937215 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3937273 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3939000 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3943912 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3943928 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3947172 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3948831 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3950231 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3951009 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3952512 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3953273 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3958636 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3958947 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3959338 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3960893 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3961293 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3961575 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3964013 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3964149 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3965487 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3965969 00:27:51.344 Removing: /var/run/dpdk/spdk_pid3965981 00:27:51.344 Clean 00:27:51.602 13:20:13 -- common/autotest_common.sh@1451 -- # return 0 00:27:51.602 13:20:13 -- 
spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:51.602 13:20:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:51.602 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:27:51.602 13:20:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:51.602 13:20:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:51.602 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:27:51.602 13:20:13 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:51.602 13:20:13 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:51.602 13:20:13 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:51.602 13:20:13 -- spdk/autotest.sh@391 -- # hash lcov 00:27:51.602 13:20:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:51.602 13:20:13 -- spdk/autotest.sh@393 -- # hostname 00:27:51.602 13:20:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:51.602 geninfo: WARNING: invalid characters removed from testname! 00:28:23.659 13:20:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:23.659 13:20:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:26.186 13:20:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:29.481 13:20:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:32.005 13:20:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:35.283 13:20:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:37.811 13:20:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:37.811 13:20:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.811 13:20:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:37.811 13:20:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.811 13:20:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.811 13:20:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.811 13:20:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.811 13:20:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.811 13:20:59 -- paths/export.sh@5 -- $ export PATH 00:28:37.811 13:20:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.811 13:20:59 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:37.811 13:20:59 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:37.811 13:20:59 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721042459.XXXXXX 00:28:37.811 13:20:59 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721042459.TyevG8 00:28:37.811 13:20:59 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:37.811 13:20:59 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:37.811 13:20:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:37.811 13:20:59 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:37.811 13:20:59 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:37.811 13:20:59 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:37.811 13:20:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:37.811 13:20:59 -- common/autotest_common.sh@10 -- $ set +x 00:28:38.070 13:20:59 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:38.070 13:20:59 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:38.070 13:20:59 -- pm/common@17 -- $ local monitor 00:28:38.070 13:20:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.070 13:20:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.070 13:20:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.070 13:20:59 -- pm/common@21 -- $ date +%s 00:28:38.070 13:20:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:38.070 13:20:59 -- pm/common@21 -- $ date +%s 00:28:38.070 13:20:59 -- pm/common@25 -- $ sleep 1 00:28:38.070 13:20:59 -- pm/common@21 -- $ date +%s 00:28:38.070 13:20:59 -- pm/common@21 -- $ date +%s 00:28:38.070 13:20:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042459 00:28:38.070 13:20:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042459 00:28:38.070 13:20:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042459 00:28:38.070 13:20:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042459 00:28:38.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042459_collect-vmstat.pm.log 00:28:38.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042459_collect-cpu-load.pm.log 00:28:38.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042459_collect-cpu-temp.pm.log 00:28:38.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042459_collect-bmc-pm.bmc.pm.log 00:28:39.006 13:21:00 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:39.006 13:21:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:39.006 13:21:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.006 13:21:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:39.006 13:21:00 -- spdk/autopackage.sh@18 -- $ 
[[ 0 -eq 0 ]] 00:28:39.006 13:21:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:39.006 13:21:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:39.006 13:21:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:39.006 13:21:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:39.006 13:21:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:39.006 13:21:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:39.006 13:21:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:39.006 13:21:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:39.006 13:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:39.006 13:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:39.006 13:21:00 -- pm/common@44 -- $ pid=3975691 00:28:39.006 13:21:00 -- pm/common@50 -- $ kill -TERM 3975691 00:28:39.006 13:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:39.006 13:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:39.006 13:21:00 -- pm/common@44 -- $ pid=3975693 00:28:39.006 13:21:00 -- pm/common@50 -- $ kill -TERM 3975693 00:28:39.006 13:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:39.006 13:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:39.006 13:21:00 -- pm/common@44 -- $ pid=3975695 00:28:39.006 13:21:00 -- pm/common@50 -- $ kill -TERM 3975695 00:28:39.006 13:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:39.006 13:21:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:39.006 13:21:00 -- pm/common@44 -- $ pid=3975726 00:28:39.006 13:21:00 -- pm/common@50 -- $ sudo -E kill -TERM 3975726 00:28:39.006 + [[ -n 3619037 ]] 00:28:39.006 + sudo kill 3619037 00:28:39.017 [Pipeline] } 00:28:39.036 [Pipeline] // stage 00:28:39.042 [Pipeline] } 00:28:39.061 [Pipeline] // timeout 00:28:39.067 [Pipeline] } 00:28:39.087 [Pipeline] // catchError 00:28:39.093 [Pipeline] } 00:28:39.111 [Pipeline] // wrap 00:28:39.118 [Pipeline] } 00:28:39.134 [Pipeline] // catchError 00:28:39.144 [Pipeline] stage 00:28:39.147 [Pipeline] { (Epilogue) 00:28:39.162 [Pipeline] catchError 00:28:39.164 [Pipeline] { 00:28:39.181 [Pipeline] echo 00:28:39.182 Cleanup processes 00:28:39.189 [Pipeline] sh 00:28:39.476 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.476 3975841 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:39.477 3976006 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.493 [Pipeline] sh 00:28:39.778 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:39.778 ++ grep -v 'sudo pgrep' 00:28:39.778 ++ awk '{print $1}' 00:28:39.778 + sudo kill -9 3975841 00:28:39.791 [Pipeline] sh 00:28:40.074 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:48.249 [Pipeline] sh 00:28:48.585 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:48.585 Artifacts sizes are good 00:28:48.603 [Pipeline] archiveArtifacts 
00:28:48.611 Archiving artifacts 00:28:48.805 [Pipeline] sh 00:28:49.092 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:49.109 [Pipeline] cleanWs 00:28:49.120 [WS-CLEANUP] Deleting project workspace... 00:28:49.120 [WS-CLEANUP] Deferred wipeout is used... 00:28:49.128 [WS-CLEANUP] done 00:28:49.130 [Pipeline] } 00:28:49.153 [Pipeline] // catchError 00:28:49.166 [Pipeline] sh 00:28:49.455 + logger -p user.info -t JENKINS-CI 00:28:49.461 [Pipeline] } 00:28:49.471 [Pipeline] // stage 00:28:49.475 [Pipeline] } 00:28:49.486 [Pipeline] // node 00:28:49.490 [Pipeline] End of Pipeline 00:28:49.520 Finished: SUCCESS
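One detail worth pulling out of the epilogue above: before archiving, the cleanup stage reaps anything still running out of the job workspace by listing matching processes, filtering the pgrep invocation itself out of the listing, and killing the survivors by PID. A sketch of that idiom; the workspace path is the one from this job, and the variable wrapper is illustrative:

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids   # in this run it reaped the leftover ipmitool monitor (3975841)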